yilunzhao commited on
Commit 4bbd4b8 · verified · 1 Parent(s): c1b7d8d

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. 20240819/2408.09650v1.json +0 -0
  2. 20240819/2408.09878v1.json +0 -0
  3. 20240819/2408.09934v1.json +192 -0
  4. 20240819/2408.09958v1.json +237 -0
  5. 20240819/2408.10381v1.json +464 -0
  6. 20241127/2105.02653v3.json +447 -0
  7. 20241127/2201.11192v2.json +600 -0
  8. 20241127/2204.02688v2.json +0 -0
  9. 20241127/2206.09906v2.json +142 -0
  10. 20241127/2211.01974v3.json +60 -0
  11. 20241127/2211.15656v4.json +0 -0
  12. 20241127/2212.11143v4.json +0 -0
  13. 20241127/2212.11571v2.json +177 -0
  14. 20241127/2305.19353v5.json +516 -0
  15. 20241127/2307.00319v4.json +0 -0
  16. 20241127/2307.14132v4.json +126 -0
  17. 20241127/2310.01522v3.json +210 -0
  18. 20241127/2310.11083v2.json +0 -0
  19. 20241127/2311.10270v5.json +139 -0
  20. 20241127/2401.15479v4.json +489 -0
  21. 20241127/2402.14244v2.json +316 -0
  22. 20241127/2402.14708v2.json +509 -0
  23. 20241127/2403.05441v3.json +0 -0
  24. 20241127/2403.14494v2.json +0 -0
  25. 20241127/2403.16790v2.json +417 -0
  26. 20241127/2404.00345v2.json +0 -0
  27. 20241127/2404.05779v2.json +0 -0
  28. 20241127/2404.08402v2.json +84 -0
  29. 20241127/2404.11161v3.json +0 -0
  30. 20241127/2405.05160v2.json +0 -0
  31. 20241127/2405.11828v2.json +0 -0
  32. 20241127/2405.17472v2.json +0 -0
  33. 20241127/2405.19644v3.json +185 -0
  34. 20241127/2406.03095v4.json +316 -0
  35. 20241127/2406.14753v3.json +0 -0
  36. 20241127/2406.17995v2.json +0 -0
  37. 20241127/2406.19226v2.json +0 -0
  38. 20241127/2406.19540v2.json +115 -0
  39. 20241127/2407.03263v2.json +0 -0
  40. 20241127/2407.03297v2.json +523 -0
  41. 20241127/2407.04127v3.json +609 -0
  42. 20241127/2407.05784v2.json +0 -0
  43. 20241127/2407.11413v2.json +131 -0
  44. 20241127/2408.06157v4.json +0 -0
  45. 20241127/2408.07401v2.json +0 -0
  46. 20241127/2408.10511v3.json +184 -0
  47. 20241127/2408.11841v2.json +0 -0
  48. 20241127/2408.12957v3.json +320 -0
  49. 20241127/2408.14776v2.json +0 -0
  50. 20241127/2408.17175v3.json +622 -0
20240819/2408.09650v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2408.09878v1.json ADDED
The diff for this file is too large to render. See raw diff
 
20240819/2408.09934v1.json ADDED
@@ -0,0 +1,192 @@
1
+ {
2
+ "title": "Human Mimetic Forearm Design with Radioulnar Joint using Miniature Bone-Muscle Modules and Its Applications",
3
+ "abstract": "The human forearm is composed of two long, thin bones called the radius and the ulna, and rotates using two axle joints.\nWe aimed to develop a forearm based on the body proportion, weight ratio, muscle arrangement, and joint performance of the human body in order to bring out its benefits.\nFor this, we need to miniaturize the muscle modules.\nTo approach this task, we arranged two muscle motors inside one muscle module, and used the space effectively by utilizing common parts.\nIn addition, we enabled the muscle module to also be used as the bone structure.\nMoreover, we used miniature motors and developed a way to dissipate the motor heat to the bone structure.\nThrough these approaches, we succeeded in developing a forearm with a radioulnar joint based on the body proportion, weight ratio, muscle arrangement, and joint performance of the human body, while keeping maintainability and reliability.\nAlso, we performed some motions such as soldering, opening a book, turning a screw, and badminton swinging using the benefits of the radioulnar structure, which have not been discussed before, and verified that Kengoro can realize skillful motions using the radioulnar joint like a human.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "INTRODUCTION",
9
+ "text": "In recent years, development of the humanoid is vigorous.\nThe humanoid, beginning with the ASIMO [1 ###reference_b1###], has two arms and two legs, and can move and walk like a human.\nThe development of not only the humanoid, but of the tendon-driven musculoskeletal humanoid, which is based on various parts of the human body, is also vigorous [2 ###reference_b2###, 3 ###reference_b3###].\nThe tendon-driven musculoskeletal humanoid is based on not only the body proportion but also the joint structure, drive system, and muscle arrangement of the human body, and is used to analyze human motion and to achieve human skillful motion.\nOf these studies, there are many which duplicate the human joint structure.\nAsano, et al. duplicates the human screw home mechanism, and discusses the achievement of motion using this structure [4 ###reference_b4###].\nAlso, Sodeyama, et al. discusses the design of the upper limb using the clavicle and scapula [5 ###reference_b5###].\nLike so, there are many studies that integrate structures specific to humans with humanoids.\n###figure_1### ###figure_2### On the other hand, there are few studies which discuss the human specific radioulnar joint structure.\nSome examples of humanoids with a radioulnar joint are [6 ###reference_b6###, 7 ###reference_b7###], but these are made of pneumatic actuators that are easy to arrange but have poor controllability, or are unable to arrange the number of muscles needed to achieve many DOFs in the forearm.\nThe conventional method of installing the muscle modules such as [8 ###reference_b8###, 9 ###reference_b9###] to the structure excels in maintainability and reliability, and includes electric motors, which have better controllability.\nHowever, we need to miniaturize the modules or propose other approaches in order to achieve many DOFs without deviating from the human body proportion, because the conventional muscle modules are large in size, and need other wasteful structures to function.\nAdditionally, the muscle arrangements, the proportion of the forearm, and the benefits of the radioulnar structure are not discussed at all in previous studies.\nThus, in this study, we conduct research about the development of a forearm with a radioulnar joint based on the proportion, weight ratio, muscle arrangement, joint structure, and joint performance of the human body, and about the motions that use its structure skillfully.\nThen, we developed a new miniature bone-muscle module, which integrates a muscle module with the structure.\nBy using this miniature bone-muscle module, we can achieve the human mimetic forearm with a radioulnar joint while keeping many DOFs, maintainability, and reliability.\nThen, we succeeded in achieving human skillful motion, which makes the best use of the radioulnar structure, but has not been discussed before.\nIn Section I ###reference_###, we explained the motive and goal of this study.\nIn Section II ###reference_###, we will explain the development and performance of the miniature bone-muscle module necessary for the forearm with a radioulnar joint.\nIn Section III ###reference_###, we will explain the achievement of the radioulnar structure using miniature bone-muscle modules, and evaluate the degree of imitation.\nIn Section IV ###reference_###, we will discuss experiments of soldering, opening a book, turning a screw, and badminton swinging as examples of human skillful motion that use the benefits of the radioulnar joint.\nFinally in Section V ###reference_###, we will state the 
conclusion and future works."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Development of Miniature Bone-Muscle Module",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Approach to Miniature Bone-muscle Module",
21
+ "text": "The human forearm is composed of two long, thin bones.\nThese bones are called the radius and the ulna, and the radioulnar structure is composed of these two bones and two axle joints located at the proximal and distal.\nHowever, the actualization of the radioulnar structure is not easy.\nWe have developed tendon-driven musculoskeletal humanoids such as Kojiro [10 ###reference_b10###], Kenzoh [7 ###reference_b7###], and Kenshiro [11 ###reference_b11###], but these were unable to completely realize the radioulnar joint, radiocarpal joint and interphalangeal joints.\nThis is due to the arrangement of muscles.\nConventionally, the body of the tendon-driven musculoskeletal humanoid is made by installing muscle modules with actuators, sensors, and circuits to the bone structure.\nFor example, there are muscle modules such as Kengoro\u2019s module[8 ###reference_b8###] and Anthrob\u2019s module[9 ###reference_b9###].\nThis method of installing muscle modules is very effective from the viewpoint of maintainability, reliability, and versatility.\nHowever, because the radioulnar structure is composed of two long, thin bones, if we install muscle modules to the bone structure, the forearm will be out of proportion, and it will be very difficult to imitate the human body in detail using many muscle modules.\nThus, we developed a new miniature bone-muscle module.\nWe succeeded in developing this muscle module using the two strategies shown below.\nIntegration of Muscle and Bone\nThis muscle module includes two actuators.\nThis approach creates space among the two motors, and we are able to make use of this space.\nAlso, the benefit of utilizing common parts for the two muscles is big in saving space.\nWe can arrange parts of the bone structure in this space.\nThus, this muscle module integrates muscle actuators to the bone structure, allowing compact arrangement without wastefully separating the structure from the muscle modules.\nAdoption of Miniature Motors and Heat Dissipation by Adherence between the Muscle and Structure\nIt is easiest to use small motors as muscles in order to make muscle modules compact.\nHowever, it is not a good idea to equip a high gear ratio motor for high torque, considering the backdrivability and efficiency.\nAdditionally, miniature muscle motors heat up easily.\nTo compensate for such drawbacks of adopting miniature motors, this module can keep continuous high tension by dissipating the muscle heat to the structure through a heat transfer sheet.\nThrough these approaches, we propose that we can actualize the radioulnar structure based on the body proportion, weight ratio, and muscle arrangement of the human body by simply connecting the muscle modules linearly, which can act as not only the muscle but also as the structure.\nIn related works, for an ordinary robot, the integration of frameless motors into the structure is being developed as adopted in TORO [12 ###reference_b12###].\nAdditionally, we aim to develop high maintainability and reliability of the module by packaging motor drivers, sensors, and cables, like the sensor-driver integrated muscle module [8 ###reference_b8###].\nAt the same time, by preparing versatility in the arrangement of muscle modules, we propose that we can use this module for not only radioulnar joints, but also for all next-generation tendon-driven musculoskeletal humanoids."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Development Details of Miniature Bone-muscle Module",
27
+ "text": "The details of the miniature bone-muscle module are shown in Fig. 2 ###reference_###.\nThe motor is a brushless DC motor, and we use 84:1 or 157:1 as the gear ratio of the motor depending on the muscle.\nThe wire is Dyneema and is wound up by the pulley.\nThe cables from the load cell of the tension measurement unit, temperature sensor attached to the motor, and hall sensor of the motor are all connected to the motor driver, and a cover protects these cables and circuits, increasing operational stability.\nWe would especially like to discuss three topics.\nFirst, \u201cSupport of bone\u201d and \u201cBase of bone\u201d become the bone structure, enabling the use of the muscle module as the structure.\nThus, we are able to connect the muscle modules lengthwise and crosswise as the structure, eliminating waste.\nSecond, this module can dissipate heat to the structure through the heat transfer sheet between \u201cBase of bone\u201d and the two motors.\nAs a result, the module can realize comparatively high continuous muscle tension even if the motor is miniature and the gear ratio is 84:1 or 157:1, which we can backdrive.\nFinally, we developed an ultra tiny tension measurement unit.\nWe can use space effectively by arranging the load cell, which defines the size of the unit, vertically.\nWe succeeded in decreasing the volume to 61 compared to the old tension measurement unit [8 ###reference_b8###].\nThe size of this unit is [] and is designed to measure tension up to 56.5 [kgf]."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "II-C Evaluating Performance of Miniature Bone-muscle Module",
33
+ "text": "First, we compare the size, weight, maximum muscle tension, and so on, between the newly developed miniature bone-muscle module and the conventional muscle module [8 ###reference_b8###].\nThe result of the comparison is shown in Table I ###reference_###.\nSince the module developed in this study has two muscle actuators inside one module, and the size and performance of the motors are different between the two modules, a simple comparison cannot be done.\nHowever, the module developed in this study was able to double the number of muscles with only a 21 increase in volume.\nSecond, we discuss the versatility of the miniature bone-muscle module.\nA characteristic of this muscle module lies in the integration of the muscle and structure, but we must not lose freedom of design of the robot through modularization.\nThus, this muscle module is designed in a way that makes it possible for the ultra tiny tension measurement units to be arranged in various directions and positions, as shown in the left of Fig. 3 ###reference_###, to gain freedom in muscle arrangement.\nThe connection among modules can also be arranged in various ways as shown in the right of Fig. 3 ###reference_###, and we can create various designs using the muscle module as the structure.\n###figure_3### Third, we discuss the ability of the ultra tiny tension measurement unit.\nThe principle of tension measurement is shown to the left of Fig. 4 ###reference_###, and we will discuss the balance of moment around the shaft.\nIn this study, we aim to measure muscle tension up to 50 [kgf], and set as 5.0 [mm], as 5.0 [mm], and as 11.3 [mm].\nBy these settings, this tension measurement unit can measure tension up to 56.5 [kgf] because the tension limit of the load cell is 50 [kgf] as shown in the equation below.\nThe result of calibration is shown to the right of Fig. 4 ###reference_###, and proves that the unit can correctly measure muscle tension up to 56.5 [kgf].\n###figure_4### Fourth, we discuss the effects of suppressing the rise in temperature by dissipating motor heat to the structure.\nIn this experiment, we lifted 20 [kgf] and 40 [kgf] using the muscle module, with and without insertion of the heat transfer sheet between the motor and the structure, and showed the rise in motor temperature graphically.\nWe measured the temperature of the motor outer cover using the temperature sensor, and the results are shown in Fig. 5 ###reference_###.\nWe can see in Fig. 5 ###reference_### that dissipating motor heat to the structure strongly suppresses the rise in muscle module temperature.\nThis indicates that the module is able to exhibit continuously high muscle tension.\n###figure_5### Finally, we attempted to dangle Kengoro on a bar with the newly developed forearm, explained in the next section, to show that the miniature bone-muscle module functions correctly.\nWe made Kengoro take the posture of dangling, fixed the muscle length, and made Kengoro dangle as shown in the right of Fig. 6 ###reference_###.\nKengoro weighs 56 [kgf], and dangles using mainly the four left and right fingers.\nThe result of muscle tension and temperature for 5 minutes is shown to the left of Fig. 6 ###reference_###.\nThe tension of the muscles that actuate the fingers is 15\u201330 [kgf], and their temperature hardly increases at all.\nThrough this experiment, we showed the strength of the miniature bone-muscle module and its effect in inhibiting the rise of temperature.\n###figure_6###"
34
+ },
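As a cross-check of the 56.5 [kgf] figure quoted above, the moment balance around the shaft can be reconstructed as follows. This is only one reading consistent with the stated numbers, since the symbol assignments were stripped from the text: it assumes the muscle wire acts on the combined 5.0 + 5.0 = 10.0 [mm] arm and the load cell on the 11.3 [mm] arm.

\[ F_{\mathrm{muscle}} \cdot (l_1 + l_2) = F_{\mathrm{loadcell}} \cdot l_3, \qquad F_{\mathrm{muscle,max}} = 50\ \mathrm{[kgf]} \times \frac{11.3}{5.0 + 5.0} = 56.5\ \mathrm{[kgf]} \]

Here l_1 = l_2 = 5.0 [mm] and l_3 = 11.3 [mm] are hypothetical labels for the three lever arms quoted in II-C.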
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "III Development of Human Mimetic Forearm with Radioulnar Joint",
39
+ "text": ""
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-A Human Radioulnar Structure",
45
+ "text": "A human forearm is structured as shown in Fig. 7 ###reference_###.\nIt is composed of two long, thin bones called the radius and the ulna, and the radioulnar joint is formed by these bones and two axle joints located at the proximal and distal.\nIn an ulna, the proximal is thick and the distal is thin, but in a radius, the proximal is thin and the distal is thick.\nThis radioulnar structure is one of the joints that are specific to humans, and we propose its characteristics as below.\nEven if the ulna is fixed to something completely, the radioulnar joint can move.\nThe radioulnar joint is clinoaxis, and the joint passes the little finger through the proximal radius and the distal ulna.\nThe radioulnar joint can disperse torsion by two long bones.\nAs for 1), we use this characteristic when we perform motions such as writing and soldering.\nWe can perform motions using 3 DOFs of the radioulnar joint and radiocarpal joint when stabilizing the arm by fixing the ulna to the table completely.\nAs for 2), we use this characteristic when we perform motions such as opening a door, turning a screw, and swinging a badminton racket.\nWhen we open a door, we propagate torque efficiently by bending the wrist joint to the ulna and matching the axis of the radioulnar joint to the door knob joint.\nWhen we swing a badminton racket, we maximize the speed of the racket head by increasing the radius of rotation in bending the wrist joint to the radius and keeping the racket head away from the radioulnar joint.\nAs for 3), this structure is effective for cabling and skin movements.\nWe propose that these structures play a part in performing human skillful motion, and that this benefit is utilized only by imitating the body proportion, weight ratio, and muscle arrangement of the human body.\nThus, we developed a human mimetic forearm with a radioulnar joint using newly developed miniature bone-muscle modules.\n###figure_7###"
46
+ },
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "III-B Realization of Human Mimetic Radioulnar Structure",
51
+ "text": "The developed forearm with a radioulnar joint is shown in Fig. 8 ###reference_###.\nIt is very compact, enabled by making most of the benefit that the miniature bone-muscle module is able to connect lengthwise and crosswise to form the structure.\nTwo modules each are equipped in the radius and ulna, and the radius is almost completely composed of only modules.\nThere are 4 modules in total, and thus 8 muscles, in the forearm.\nThe radius is thick at the distal like that of a human, and connects to the hand [13 ###reference_b13###] through a universal joint.\nLikewise, the ulna is thick at the proximal, and connects to the humerus.\nTo rotate the radioulnar joint, spherical plain bearings are equipped in the proximal of the radius and the distal of the ulna as axle joints.\n###figure_8### ###figure_9###"
52
+ },
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "III-C Performance of Developed Forearm",
57
+ "text": "The muscle arrangement is shown in Fig. 9 ###reference_###.\nWe imitated 8 muscles in the human forearm, and there are 6 DOFs that are moved by the 8 muscles, including 1 DOF of the radioulnar joint, 2 DOFs of the radiocarpal joint and 3 DOFs of the fingers (thumb, index and middle, ring and little).\nIn these muscles, the gear ratios of , and are 84:1, and those of the others are 157:1.\nThe number of muscles can be an important index in expressing how much freedom the forearm has, and this forearm actualizes many more muscles compactly compared to other robots such as Anthrob [9 ###reference_b9###] (2 muscles), Kenshiro [11 ###reference_b11###] (0 muscles), and Kenzoh [7 ###reference_b7###] (5 muscles).\nAlso, we succeeded in imitating the human body without deviating from the human body proportion and weight ratio as shown in Fig. 10 ###reference_###.\nWe show the workspace and maximum torque of 4 DOFs of the elbow joint, radioulnar joint, and radiocarpal joint developed in this study in Table II ###reference_###.\nThis also indicates that the forearm is correctly based on the human body.\nThus, we succeeded in developing a forearm with a radioulnar joint, which has many degrees of freedom and is based on the body proportion, weight ratio, muscle arrangement, and joint performance of the human body.\n###figure_10###"
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "IV Achievement of Human Skillful Motion using Radioulnar Structure",
63
+ "text": "Due to the success in the development of a radioulnar structure based on the human body proportion, we propose that Kengoro is able to move in various ways using the benefits of this radioulnar structure.\nThus, we performed some human-specific motions using Kengoro [3 ###reference_b3###] equipped with the forearm having the radioulnar joint.\nIn this section, we will evaluate the degree of imitation of the forearm and verify the benefits of the radioulnar structure through experiments conducted on motion that uses the benefits described in the previous chapter, such as soldering, opening a book, turning a screw, and swinging a badminton racket."
64
+ },
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "IV-A Soldering",
69
+ "text": "The motion of soldering (Fig. 11 ###reference_###) is an example that effectively uses the characteristic that the radioulnar joint can move even with the ulna attached to something.\nWe can see that Kengoro is able to move the radioulnar joint stably with the ulna attached to the table.\nThis characteristic is thought to also be seen when writing and using a keyboard.\nTypically, large and strong structures are needed in order to make robots with high rigidity for stable hand movement.\nHowever, if the robot has a low rigidity, stable and fine movements can be done by having a radioulnar joint and moving the radioulnar and radiocarpal joints with the ulna bone attached to something.\nWe propose that this can support the drawback of being unable to do fine movements by the tendon-driven musculoskeletal humanoid, which has safe structures but low rigidity.\n###figure_11###"
70
+ },
71
+ {
72
+ "section_id": "4.2",
73
+ "parent_section_id": "4",
74
+ "section_name": "IV-B Opening a Book",
75
+ "text": "The motion of opening a book (Fig. 12 ###reference_###) is an example that effectively uses the characteristic that the radioulnar joint axis is slanting and passes through at about the little finger.\nWe can see that Kengoro is able to open a book by merely rotating the radioulnar joint, which becomes a motion like that of turning the palm.\nAlso, we can say that this extends the capacity of movement.\nFig. 13 ###reference_### is the comparison between an ordinary straight radioulnar joint and the slanting radioulnar joint of the reachable points of the center of the palm, that can be reached by only using the radioulnar and radiocarpal joints.\nThe slanting radioulnar joint can extend hand movement, and the hand can move widely and stably by combining this and the previous benefit that the radioulnar joint can move even with the ulna attached to something.\n###figure_12### ###figure_13###"
76
+ },
77
+ {
78
+ "section_id": "4.3",
79
+ "parent_section_id": "4",
80
+ "section_name": "IV-C Turning a Screw",
81
+ "text": "When turning a screw with a screwdriver, Kengoro can transfer torque efficiently by matching the radioulnar joint axis to the axis of the screwdriver.\nWe can see that the tip of the screwdriver is hardly blurred.\nThe motion of opening a door uses the same principle.\n###figure_14###"
82
+ },
83
+ {
84
+ "section_id": "4.4",
85
+ "parent_section_id": "4",
86
+ "section_name": "IV-D Badminton Swing",
87
+ "text": "When swinging a badminton racket (Fig. 15 ###reference_###), Kengoro can increase the radius of rotation and speed in the racket head by keeping the hand away from the radioulnar joint.\nDue to the slanting radioulnar joint, Kengoro can have a larger radius of rotation than with the ordinary straight radioulnar joint.\nThis motion contrasts with the motion of turning a screw, and is a skillful human movement that uses the effects of the slanting radioulnar joint for speed of the swing instead of the torque.\nIn this study, we used the optimization method of [16 ###reference_b16###] to create the badminton swing motion, and made Kengoro move in this way.\nThe joint angle velocity of Kengoro during this motion is shown in Fig. 16 ###reference_###, and the speed of the radioulnar joint was the fastest.\nSpecifically, the slanting radioulnar joint increases the radius of rotation of racket by about 50 [mm] compared with the ordinary straight radioulnar joint, and the increase of the racket speed by the slanting joint is 0.35 [m/s] in contrast to the total racket speed of 8 [m/s], thus the effect is about 4.3 [%].\nThis is not a big effect, but shows that the radioulnar joint is important in competitive sports that require speed, and is very important to be used properly and skillfully.\n###figure_15### ###figure_16###"
88
+ },
89
+ {
90
+ "section_id": "5",
91
+ "parent_section_id": null,
92
+ "section_name": "CONCLUSION",
93
+ "text": "In this study, we explained the development of the human mimetic forearm with a radioulnar joint made by miniature bone-muscle modules.\nFirst, we explained the need for a forearm with a radioulnar joint that is based on the body proportion, weight ratio, and muscle arrangement of the human body in order to achieve human skillful motion.\nThen, we explained the need for the miniaturization of muscle modules to save space in order to actualize the human mimetic radioulnar joint of the tendon-driven musculoskeletal humanoid.\nTo approach this, we proposed the method of using space efficiently by installing two muscle actuators in one muscle module, integrating muscle and bone structure, and using a more miniature motor and solving its drawbacks by dissipating motor heat to the structure.\nWe succeeded in developing a forearm that is based on the body proportion, weight ratio, muscle arrangement, and joint performance of the human body using newly developed miniature bone-muscle modules.\nFinally, we conducted experiments on some motions using characteristics of the radioulnar joint, such as the ability to move with the ulna attached to something, and that the joint is slanting.\nThrough these experiments, we proposed the correctness of the approach in the human mimetic radioulnar joint with miniature bone-muscle modules, and observed the benefits of the radioulnar joint.\nFor future works, we propose the actualization of a small tendon-driven musculoskeletal humanoid made of the newly developed miniature bone-muscle modules.\nThese miniature bone-muscle modules can be used for the forearm, as well as various other parts of the tendon-driven robot.\nAt the same time, we aim to understand the biological meaning of the radioulnar joint, and find motions that use this joint that are more skillful."
94
+ }
95
+ ],
96
+ "appendix": [],
97
+ "tables": {
98
+ "1": {
99
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Comparison of newly developed miniature bone-muscle module and sensor driver integrated muscle module <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09934v1#bib.bib8\" title=\"\">8</a>]</cite>.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S2.T1.3\" style=\"width:641.5pt;height:146.7pt;vertical-align:-0.9pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-35.6pt,8.1pt) scale(0.9,0.9) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.3.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.4.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.4.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.4.1.2\">Miniature bone-muscle module in this study</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.3.3.4.1.3\">Sensor-driver integrated muscle module <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09934v1#bib.bib8\" title=\"\">8</a>]</cite>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S2.T1.1.1.1.1\">Module dimension []</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S2.T1.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T1.3.3.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.5.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.5.1.1\">Module weight [kgf]</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.5.1.2\">0.30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.3.5.1.3\">0.32</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.6.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.6.2.1\">Number of actuators</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.6.2.2\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.3.6.2.3\">1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.7.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.7.3.1\">Actuator</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.7.3.2\">BLDC-60W (changeable)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.3.7.3.3\">BLDC-120W (changeable)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.8.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.8.4.1\">Diameter of winding pulley [mm]</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.8.4.2\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.3.8.4.3\">12</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.9.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.9.5.1\">Reduction ratio of actuator</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.9.5.2\">157:1 
(changeable)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.3.9.5.3\">53:1 (changeable)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.10.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.10.6.1\">Continuous maximum winding tension [N]</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.10.6.2\">424</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.3.10.6.3\">338</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.11.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.11.7.1\">Winding rate with no load [mm/s]</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.3.3.11.7.2\">116</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S2.T1.3.3.11.7.3\">200</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
100
+ "capture": "TABLE I: Comparison of newly developed miniature bone-muscle module and sensor driver integrated muscle module [8]."
101
+ },
102
+ "2": {
103
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Comparison between joint performance of a human and that of Kengoro.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T2.4\" style=\"width:312.9pt;height:145.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-17.4pt,8.1pt) scale(0.9,0.9) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.4.4\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.1.1.1.2\"></th>\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1.3\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T2.1.1.1.1\">Human\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_t\" colspan=\"2\" id=\"S3.T2.1.1.1.4\">Kengoro</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.2.2.2.2\">Joint</th>\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T2.2.2.2.3\"></th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.2.2.2.4\">Torque</td>\n<td class=\"ltx_td ltx_align_right ltx_border_r ltx_border_t\" id=\"S3.T2.2.2.2.5\">Workspace</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.2.2.2.1\">Torque\n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.2.2.2.6\">Workspace</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.4.5.1\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T2.4.4.5.1.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.4.4.5.1.2\"></th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.5.1.3\">[Nm]</td>\n<td class=\"ltx_td ltx_align_right ltx_border_r\" id=\"S3.T2.4.4.5.1.4\">[deg]</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.5.1.5\">[Nm]</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.5.1.6\">[deg]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.4.6.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T2.4.4.6.2.1\">Elbow</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S3.T2.4.4.6.2.2\">pitch</th>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S3.T2.4.4.6.2.3\">-72.5 \u2013 42.1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_r ltx_border_tt\" id=\"S3.T2.4.4.6.2.4\">-145 \u2013 0</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S3.T2.4.4.6.2.5\">-49.9 \u2013 46.5</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S3.T2.4.4.6.2.6\">-145 \u2013 0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.4.7.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.4.4.7.3.1\">Radioulnar</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.4.4.7.3.2\">yaw</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.7.3.3\">-7.3 \u2013 9.1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_r\" id=\"S3.T2.4.4.7.3.4\">-90 \u2013 85</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.7.3.5\">-8.5 \u2013 3.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.7.3.6\">-85 \u2013 85</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.4.8.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.4.4.8.4.1\">Wrist</th>\n<th 
class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.4.4.8.4.2\">roll</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.8.4.3\">-12.2 \u2013 7.1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_r\" id=\"S3.T2.4.4.8.4.4\">-85 \u2013 85</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.8.4.5\">-15.1 \u2013 14.6</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.8.4.6\">-75 \u2013 85</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.4.9.5\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T2.4.4.9.5.1\"></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.4.4.9.5.2\">pitch</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.9.5.3\">-11 \u2013 9.5</td>\n<td class=\"ltx_td ltx_align_right ltx_border_r\" id=\"S3.T2.4.4.9.5.4\">-15 \u2013 45</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.9.5.5\">-15.9 \u2013 13.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.9.5.6\">-15 \u2013 45</td>\n</tr>\n</tbody>\n<tfoot class=\"ltx_tfoot\">\n<tr class=\"ltx_tr\" id=\"S3.T2.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" colspan=\"6\" id=\"S3.T2.3.3.3.1\">\n <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09934v1#bib.bib14\" title=\"\">14</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.09934v1#bib.bib15\" title=\"\">15</a>]</cite>\n</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" colspan=\"6\" id=\"S3.T2.4.4.4.1\">\n simulated value</th>\n</tr>\n</tfoot>\n</table>\n</span></div>\n</figure>",
104
+ "capture": "TABLE II: Comparison between joint performance of a human and that of Kengoro."
105
+ }
106
+ },
107
+ "image_paths": {
108
+ "1": {
109
+ "figure_path": "2408.09934v1_figure_1.png",
110
+ "caption": "Figure 1: Forearm of Kengoro, composed of newly developed miniature bone-muscle module.",
111
+ "url": "http://arxiv.org/html/2408.09934v1/x1.png"
112
+ },
113
+ "2": {
114
+ "figure_path": "2408.09934v1_figure_2.png",
115
+ "caption": "Figure 2: Details of the newly developed miniature bone-muscle module.",
116
+ "url": "http://arxiv.org/html/2408.09934v1/x2.png"
117
+ },
118
+ "3": {
119
+ "figure_path": "2408.09934v1_figure_3.png",
120
+ "caption": "Figure 3: General versatility of the newly developed bone-muscle module. Left: various arrangements of ultra tiny tension measurement unit. Right: various connections of muscle modules.",
121
+ "url": "http://arxiv.org/html/2408.09934v1/x3.png"
122
+ },
123
+ "4": {
124
+ "figure_path": "2408.09934v1_figure_4.png",
125
+ "caption": "Figure 4: The principle of ultra tiny tension measurement unit. Left: the principle of tension measurement. Right: the result of calibration.",
126
+ "url": "http://arxiv.org/html/2408.09934v1/x4.png"
127
+ },
128
+ "5": {
129
+ "figure_path": "2408.09934v1_figure_5.png",
130
+ "caption": "Figure 5: Comparison of motor heat transition, with and without heat transfer sheet. 20 [kgf] and 40 [kgf] weights are lifted with the newly developed miniature bone-muscle module.",
131
+ "url": "http://arxiv.org/html/2408.09934v1/x5.png"
132
+ },
133
+ "6": {
134
+ "figure_path": "2408.09934v1_figure_6.png",
135
+ "caption": "Figure 6: Result of dangling. Left: overview of dangling motion. Right: muscle tension and temperature during the experiment.",
136
+ "url": "http://arxiv.org/html/2408.09934v1/x6.png"
137
+ },
138
+ "7": {
139
+ "figure_path": "2408.09934v1_figure_7.png",
140
+ "caption": "Figure 7: Structure of the human radioulnar joint.",
141
+ "url": "http://arxiv.org/html/2408.09934v1/x7.png"
142
+ },
143
+ "8": {
144
+ "figure_path": "2408.09934v1_figure_8.png",
145
+ "caption": "Figure 8: Overview of newly developed Kengoro forearm.",
146
+ "url": "http://arxiv.org/html/2408.09934v1/x8.png"
147
+ },
148
+ "9": {
149
+ "figure_path": "2408.09934v1_figure_9.png",
150
+ "caption": "Figure 9: Muscle arrangement of the newly developed forearm.",
151
+ "url": "http://arxiv.org/html/2408.09934v1/x9.png"
152
+ },
153
+ "10": {
154
+ "figure_path": "2408.09934v1_figure_10.png",
155
+ "caption": "Figure 10: Comparison of upper limb link length and weight between a human and Kengoro with a newly developed forearm.",
156
+ "url": "http://arxiv.org/html/2408.09934v1/x10.png"
157
+ },
158
+ "11": {
159
+ "figure_path": "2408.09934v1_figure_11.png",
160
+ "caption": "Figure 11: Kengoro soldering. Kengoro with a soldering iron can move the radioulnar joint with the ulna attached to the table.",
161
+ "url": "http://arxiv.org/html/2408.09934v1/x11.png"
162
+ },
163
+ "12": {
164
+ "figure_path": "2408.09934v1_figure_12.png",
165
+ "caption": "Figure 12: Kengoro opening a book.",
166
+ "url": "http://arxiv.org/html/2408.09934v1/x12.png"
167
+ },
168
+ "13": {
169
+ "figure_path": "2408.09934v1_figure_13.png",
170
+ "caption": "Figure 13: The reachable points of the center of the palm compared between the slanting radioulnar joint and the ordinary straight radioulnar joint. Left: x-y plain. Right: y-z plain.",
171
+ "url": "http://arxiv.org/html/2408.09934v1/x13.png"
172
+ },
173
+ "14": {
174
+ "figure_path": "2408.09934v1_figure_14.png",
175
+ "caption": "Figure 14: Kengoro turning a screw with a screwdriver. Upper picture shows that the radioulnar joint axis matches the screwdriver axis.",
176
+ "url": "http://arxiv.org/html/2408.09934v1/x14.png"
177
+ },
178
+ "15": {
179
+ "figure_path": "2408.09934v1_figure_15.png",
180
+ "caption": "Figure 15: Badminton swing motion. Upper pictures show comparison between the slanting radioulnar structure with large radius of rotation of racket and the ordinary straight radioulnar structure with small radius of rotation of racket.",
181
+ "url": "http://arxiv.org/html/2408.09934v1/x15.png"
182
+ },
183
+ "16": {
184
+ "figure_path": "2408.09934v1_figure_16.png",
185
+ "caption": "Figure 16: Joint angle velocity of badminton swing motion.",
186
+ "url": "http://arxiv.org/html/2408.09934v1/x16.png"
187
+ }
188
+ },
189
+ "validation": true,
190
+ "references": [],
191
+ "url": "http://arxiv.org/html/2408.09934v1"
192
+ }
20240819/2408.09958v1.json ADDED
@@ -0,0 +1,237 @@
1
+ {
2
+ "title": "AdaResNet: Enhancing Residual Networks with Dynamic Weight Adjustment for Improved Feature Integration",
3
+ "abstract": "In very deep neural networks, gradients can become extremely small during backpropagation, making it challenging to train the early layers. ResNet (Residual Network) addresses this issue by enabling gradients to flow directly through the network via skip connections, facilitating the training of much deeper networks. However, in these skip connections, the input (ipd) is directly added to the transformed data (tfd), treating ipd and tfd equally, without adapting to different scenarios. In this paper, we propose AdaResNet (Auto-Adapting Residual Network), which automatically adjusts the ratio between ipd and tfd based on the training data. We introduce a variable, , to represent this ratio. This variable is dynamically adjusted during backpropagation, allowing it to adapt to the training data rather than remaining fixed. Experimental results demonstrate that AdaResNet achieves a maximum accuracy improvement of over 50% compared to traditional ResNet.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "In recent years, deep learning has revolutionized numerous fields, ranging from computer vision and natural language processing to autonomous systems and beyond. Among the various architectures that have emerged, ResNet (Residual Network) has played a pivotal role in advancing the state of the art in these domains [1 ###reference_b1###] [2 ###reference_b2###]. Its innovative design has enabled the training of extremely deep neural networks by addressing a critical challenge faced in traditional deep architectures: the vanishing gradient problem.\nAs neural networks become deeper, gradients can diminish significantly during the backpropagation process. This issue hampers the effective training of the early layers, causing the network to stagnate and preventing it from learning meaningful representations. ResNet tackles this problem by introducing skip connections [3 ###reference_b3###], which allow gradients to bypass intermediate layers and flow directly through the network. This mechanism facilitates the training of much deeper networks, making it possible to achieve unprecedented levels of accuracy and performance on complex tasks.\nDespite the success of ResNet, the standard implementation of skip connections involves directly adding the input () to the transformed data (), i.e., they are combined in a fixed ratio of 1:1, as illustrated in Figure 1 ###reference_###. This approach inherently assumes that and contribute equally to the network\u2019s output, which may not be optimal across all recognition scenarios. By treating and as identical in their contribution, the traditional ResNet architecture does not account for the varying importance of and across different layers or diverse training data distributions.\n###figure_1### In this paper, we propose a novel architecture, AdaResNet (Auto-Adapting Residual Network), which enhances the flexibility of ResNet by automatically adapting the contribution of and during training. Specifically, we introduce a learnable parameter, denoted as , which dynamically adjusts the ratio between and based on the training data. Unlike traditional ResNet, where the combination of and remains fixed, AdaResNet allows this ratio to be tuned throughout the training process, thereby improving the network\u2019s ability to generalize across diverse data distributions.\nThe contributions of this paper are threefold:\n(1) Introduction of AdaResNet: We present AdaResNet, a novel extension of the ResNet architecture that incorporates an adaptive mechanism to balance the contributions of skipped input () and processed data (). This approach overcomes the limitations of the fixed 1:1 ratio combination used in traditional ResNet, allowing for more flexible and effective integration of and .\n(2) Learnable parameter : We propose a new learnable parameter, , which is automatically optimized during training. This parameter enables the network to dynamically adjust the balance between and in response to varying data characteristics, improving the model\u2019s adaptability and performance.\n(3)Layer-specific and task-specific characteristics of the learnable parameter: We identify that the optimal weights for skip connections vary not only across different layers within a deep network but also across different training tasks. 
This insight challenges the conventional one-size-fits-all approach of traditional ResNet, where a uniform weight ratio is applied across all layers, regardless of the specific role of each layer or the nature of the training data.\nThe remainder of this paper is organized as follows. Section II describes the AdaResNet model in detail, including the formulation of the parameter and the corresponding backpropagation. Section III presents our experimental setup and results. In Section IV, we review related work in deep learning architectures and adaptive mechanisms. Finally, Section V concludes the paper."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Model",
15
+ "text": "In very deep networks, gradients can become extremely small during backpropagation, making it difficult to train the early layers. ResNet (Residual Network) addresses this challenge by allowing gradients to flow directly through the network via skip connections, facilitating the training of much deeper networks.\nThe process of transforming input data () to produce output () in a traditional ResNet can be described as in (1 ###reference_###)\nHere, the input is successively transformed by functions . The original input or its less transformed format () is then added via a shortcut connection (identity mapping) to the output of the final transformation , producing the final output through the corresponding activation function .\nIn this process, the sequence of transformations constitutes the main computational pathway in ResNet, and we refer to its output as the transformed data (). On the other hand, or , which is either less processed or directly the input, is utilized to facilitate the training of much deeper networks, and we refer to it as the input represent data () or simply the input.\nTransformed data. The transformed data refers to the output generated after applying a series of operations\u2014such as convolution, batch normalization, and activation functions\u2014on an input within a residual block of a neural network. This data represents the modifications made to the input as it passes through various layers in the block, capturing the learned features and patterns.\nIn a residual block, the transformed data is the result of the main processing path, which typically involves several convolutional layers followed by normalization and activation. This data is then combined with the input represent data (often via addition) to form the output of the residual block, enabling the network to learn more complex functions by effectively adding incremental changes to the input.\nInput represent data. The input represent data refers to the data that is passed directly from the input of a residual block to its output, often without undergoing significant transformation. This data serves as a baseline or identity mapping, allowing the network to retain and propagate the original input features alongside the transformed features from the main processing path.\nIn a residual block, the input represent data typically bypasses the primary convolutional operations and is combined with the transformed data at the block\u2019s output. This bypass, or shortcut connection, helps mitigate issues like vanishing gradients by ensuring that gradients can flow more easily through the network, leading to more effective training of deep models.\nThe combination of not only facilitates easier propagation of gradients to earlier layers but also impacts the final results differently.\nHowever, the contributions of the input represent data and the transformed data may not be equal. To control the influence of each component, we introduce a weight between the input and the transformed data, referred to as the weight of transformed data and input represent data. This weight is denoted by the variable , where stands for the Transformed Data and stands for the Input Represent Data.\nThis approach forms the foundation of the AdaResNet architecture, a variant of the ResNet architecture that incorporates the to modulate the contribution of the input. 
AdaResNet is closely related to ResNet; for example, AdaResNet50 is based on the ResNet50 architecture but includes this weighted mechanism.\nIn the modified structure, the weight is introduced as shown in Equation (2 ###reference_###).\n###figure_2### The parameter enables the network to learn the optimal influence of the input on the final output . If is learned to be close to zero, the network emphasizes the transformed data over the raw input. Conversely, a larger indicates a greater influence of the raw input. When equals 1, the model functions as a traditional ResNet.\nAdditionally, is automatically adjusted based on the input data. Specifically, it changes dynamically in response to the loss function during training, being updated through the process of gradient descent. We analyze this process in detail in the following section."
16
+ },
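As a concrete illustration of the weighted skip connection described in Section II, below is a minimal Keras-style sketch, not the authors' implementation. The layer name WeightedResidual and the symbol alpha for the learnable weight are assumptions; the block only encodes output = tfd + alpha * ipd with alpha trainable, which is the form Equation (2) describes.

import tensorflow as tf

class WeightedResidual(tf.keras.layers.Layer):
    # Hypothetical sketch of the AdaResNet skip connection: output = tfd + alpha * ipd.
    def build(self, input_shape):
        # Scalar learnable weight, initialized to 1 so training starts from a plain ResNet addition.
        self.alpha = self.add_weight(name="alpha", shape=(), initializer="ones", trainable=True)
        super().build(input_shape)

    def call(self, inputs):
        tfd, ipd = inputs  # transformed data and input represent data
        return tfd + self.alpha * ipd

When alpha stays at 1 the layer reduces to the standard identity shortcut; values away from 1 rebalance the two branches, which is the flexibility the text attributes to AdaResNet.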
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Gradient Descent Algorithm",
21
+ "text": "Assuming the loss function is , the update formula for the parameter during each training step is given by (3 ###reference_###).\nwhere:\n- is the learning rate, which controls the step size of the update.\n- is the gradient of the loss function with respect to .\nUsing (3 ###reference_###), is gradually adjusted to minimize the loss function. As a result, changes dynamically during the training process, enabling the model to better fit the data by optimizing the balance between the input represent data (ipd) and the transformed data (tfd). This automatic adjustment process helps improve the final prediction accuracy.\nBelow, we provide a detailed description of the backward pass and the updating process for .\nGiven the output of the model (a simplified representation of the ResNet, where a typical ResNet contains multiple instances of this structure), the loss function measures the difference between the predicted output and the true labels ."
22
+ },
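Written out explicitly, the update rule referred to as (3) is the usual gradient-descent step. The symbols are assumed here because the inline math was stripped: alpha for the learnable weight and eta for the learning rate.

\[ \alpha \leftarrow \alpha - \eta \, \frac{\partial L}{\partial \alpha} \]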
23
+ {
24
+ "section_id": "2.1.1",
25
+ "parent_section_id": "2.1",
26
+ "section_name": "II-A1 Gradient of the Loss Function with Respect to the Output",
27
+ "text": "During the backward pass, the objective is to compute the gradients of the loss function with respect to each of the model\u2019s parameters. These gradients are used to update the parameters in the direction that minimizes the loss.\nThe computation of the gradient () of the loss function is shown in Equation (4 ###reference_###).\nThis gradient indicates how changes in the output of AdaResNet affect the loss function. It is computed by differentiating the loss function with respect to the output of AdaResNet."
28
+ },
29
+ {
30
+ "section_id": "2.1.2",
31
+ "parent_section_id": "2.1",
32
+ "section_name": "II-A2 Gradient of the Output with Respect to",
33
+ "text": "Next, we determine how changes in affect the output . Recall that:\nTaking the partial derivative of with respect to gives:\nAdditionally, we can assign a weight to (the processed intermediary data). However, since this involves a relative relationship between and , we choose to set relative to ."
34
+ },
35
+ {
36
+ "section_id": "2.1.3",
37
+ "parent_section_id": "2.1",
38
+ "section_name": "II-A3 Gradient of the Loss Function with Respect to",
39
+ "text": "By applying the chain rule, the gradient of the loss function with respect to is given by:\nSubstituting the previously computed gradients:\nThis gradient demonstrates how changes in will affect the loss function. It is used to update during the optimization step, which will adjust the relative influence between and .\nAlthough this derivation is based on a simplified form of AdaResNet with a single layer contributing to the output (), the same principles apply to the full AdaResNet architecture, which may have multiple layers (e.g., )."
40
+ },
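Under the same assumed notation (output \hat{y} = tfd + alpha * ipd, loss L), the two derivatives sketched in II-A2 and II-A3 can be written out as

\[ \frac{\partial \hat{y}}{\partial \alpha} = ipd, \qquad \frac{\partial L}{\partial \alpha} = \frac{\partial L}{\partial \hat{y}} \cdot \frac{\partial \hat{y}}{\partial \alpha} = \frac{\partial L}{\partial \hat{y}} \cdot ipd, \]

so the gradient that drives the alpha update is simply the output gradient weighted by the identity-branch input.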
41
+ {
42
+ "section_id": "2.1.4",
43
+ "parent_section_id": "2.1",
44
+ "section_name": "II-A4 Parameter Update",
45
+ "text": "During the parameter update step, an optimization algorithm (e.g., gradient descent or Adam) uses the computed gradients to update . For gradient descent, the update rule is:\nwhere is the learning rate. This update step is repeated for each batch of training data across multiple epochs, leading to an optimized result from the training data."
46
+ },
47
+ {
48
+ "section_id": "2.2",
49
+ "parent_section_id": "2",
50
+ "section_name": "II-B Training Neural Network with Custom Parameter",
51
+ "text": "Based on the proposed model and backpropagation mechanism, the training process of AdaResNet is as follows."
52
+ },
53
+ {
54
+ "section_id": "2.2.1",
55
+ "parent_section_id": "2.2",
56
+ "section_name": "II-B1 Forward Pass of",
57
+ "text": "- During the forward pass, the custom layer receives inputs and the intermediate result , and then calculates the output as . This output is then passed to subsequent layers or serves as the final model output."
58
+ },
59
+ {
60
+ "section_id": "2.2.2",
61
+ "parent_section_id": "2.2",
62
+ "section_name": "II-B2 Calculating the Loss Function",
63
+ "text": "- The model output is compared with the true labels to compute the loss function (assumed to be )."
64
+ },
65
+ {
66
+ "section_id": "2.2.3",
67
+ "parent_section_id": "2.2",
68
+ "section_name": "II-B3 Backward Pass",
69
+ "text": "- The backpropagation algorithm calculates the gradients of the loss function with respect to the model parameters. During this process, the gradient of is also computed."
70
+ },
71
+ {
72
+ "section_id": "2.2.4",
73
+ "parent_section_id": "2.2",
74
+ "section_name": "II-B4 Updating the Parameters",
75
+ "text": "- The optimizer (such as Adam) updates all trainable parameters, including , based on the computed gradients. This update process is based on the gradient descent algorithm, causing to adjust slightly after each batch of data to minimize the loss function.\nThe process of using can be described in Algorithm 1 ###reference_###."
76
+ },
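As a rough illustration of the custom layer described in II-B (the authors' actual code is in their GitHub repository), the sketch below defines a Keras layer holding one trainable scalar that combines the skipped input with the transformed data; the class name AdaWeightedAdd, the weight name ada_weight, and the ones-initialization are assumptions made for this sketch.

```python
import tensorflow as tf

class AdaWeightedAdd(tf.keras.layers.Layer):
    """Combine transformed data (tfd) and skipped input (ipd) as tfd + w * ipd,
    where w is a trainable scalar updated by backpropagation together with the
    rest of the network."""

    def build(self, input_shape):
        # One scalar per layer instance; starting at 1.0 makes the layer behave
        # like a standard ResNet skip connection before training adjusts it.
        self.w = self.add_weight(name="ada_weight", shape=(),
                                 initializer="ones", trainable=True)

    def call(self, inputs):
        tfd, ipd = inputs           # transformed data, input represent data
        return tfd + self.w * ipd   # weighted residual combination

# Hypothetical usage inside a residual block:
#   x = conv_stack(block_input)                # tfd
#   out = AdaWeightedAdd()([x, block_input])   # replaces layers.Add()([x, block_input])
```

Instantiating one such layer per identity or conv block yields the per-stage weights discussed later in II-D2, while reusing a single instance everywhere corresponds to the unified-weight variant evaluated in Section III.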
77
+ {
78
+ "section_id": "2.3",
79
+ "parent_section_id": "2",
80
+ "section_name": "II-C Brief Explanation",
81
+ "text": "In this subsection, we briefly explain the rationale for introducing the weight between the Transformed Data and the Input Represent Data.\nIn the equation , inherently contributes equally to the output as , meaning that both have the same impact on the final output. However, in most cases, we cannot assume this equal contribution. Even within the same scenario, different training data can alter the relationship between these contributions.\nTo formalize this, we introduce a function to describe how much a parameter contributes to the output . In ResNet, both the input represent data ipd and the transformed data tfd contribute to the recognition target . However, in general, we cannot assume that .\nWe use a counterexample to illustrate the need for variable weighting. Assume that the input data has the same weight as the intermediate results. One key feature of ResNet is that it can be extended into many layers. Let us consider two consecutive layers, and , and examine the contributions and .\nIf in layer , where represents the input of the layer, then when the process continues to the next layer , the input data is now , and the transformed data is . The input data of layer is derived from the processed results of layer , and since has undergone non-linear processing (e.g., through the ReLU activation function) in layer , it is difficult to maintain a linear one-to-one relationship between the input data and the transformed data. Therefore, there is no guarantee that the contributions will remain equal in layer , as shown in (II-C ###reference_0###). In fact, as the number of layers increases, it becomes more likely that their contributions will diverge.\nWe conclude, as shown in (II-C ###reference_0###), that in most cases, even if one layer exhibits equal contributions from the input and the transformed data, it is unlikely that all layers will maintain this equality. Consequently, the weights cannot be assumed to be equal across the network.\nTherefore, must be adjusted during the learning process, meaning it should dynamically change throughout training. This dynamic adjustment is crucial for ensuring that the network can effectively capture and utilize relevant features while minimizing the impact of irrelevant or noisy data."
82
+ },
83
+ {
84
+ "section_id": "2.4",
85
+ "parent_section_id": "2",
86
+ "section_name": "II-D Factors influencing",
87
+ "text": "Several factors influence , including:"
88
+ },
89
+ {
90
+ "section_id": "2.4.1",
91
+ "parent_section_id": "2.4",
92
+ "section_name": "II-D1 Dependency on Training Datasets",
93
+ "text": "The first challenge is that can vary significantly depending on the specific training dataset used. Different datasets possess unique distributions and characteristics, necessitating the adaptation of to ensure optimal performance.\nwhere represents sub sets of a training dataset.\nMoreover, this ratio often differs when training on different datasets, such as MNIST and CIFAR-10.\nwhere represents type of the training datasets."
94
+ },
95
+ {
96
+ "section_id": "2.4.2",
97
+ "parent_section_id": "2.4",
98
+ "section_name": "II-D2 Neural Network Architecture",
99
+ "text": "The specific neural network architecture also plays a significant role in determining the optimal value of . Networks with varying depths, widths, and connectivity patterns exhibit distinct learning behaviors, thereby affecting the sensitivity and responsiveness of to changes in the training data. Consequently, the dynamic adjustment of must be tailored to the specific architecture of the neural network in question.\nIn a ResNet network, there are different stages (such as several identity blocks and convolutional blocks111https://github.com/keras-team/keras-applications/blob/master/keras_applications/resnet50.py ###reference_ations/blob/master/keras_applications/resnet50.py###, each of which can be seen as a stage) to use the , those values can also be different in different stage. Thus, to reflect difference in each stage, the weight can be an array with respect to each stage as in (8 ###reference_###). This may increase the complex of neural network, for simplicity, can be a unique one in all stages in some scenarios.\n,where are stage of a neural network. Stage is one place where make input data to be mixed with processed data, such as one identify block or one cov block."
100
+ },
101
+ {
102
+ "section_id": "2.4.3",
103
+ "parent_section_id": "2.4",
104
+ "section_name": "II-D3 Non-Uniqueness of Optimal Values",
105
+ "text": "A further challenge lies in the fact that the optimal value of may not be unique, but rather exist as a set of potential values. This non-uniqueness stems from the inherent complexity and redundancy within neural networks, which often possess multiple solutions that achieve similar levels of performance."
106
+ },
107
+ {
108
+ "section_id": "3",
109
+ "parent_section_id": null,
110
+ "section_name": "III Verification",
111
+ "text": "To validate the effectiveness of the proposed method, we conducted comparative experiments using three different approaches: (1) the proposed method based on ResNet 50 with a trainable weight, AdaResNet (2) the traditional ResNet 50, and (3) a method using a fixed weight (2x) instead of a trainable one. The results over 10 epochs are reported and discussed."
112
+ },
113
+ {
114
+ "section_id": "3.1",
115
+ "parent_section_id": "3",
116
+ "section_name": "III-A Accuracy",
117
+ "text": ""
118
+ },
119
+ {
120
+ "section_id": "3.1.1",
121
+ "parent_section_id": "3.1",
122
+ "section_name": "III-A1 Experimental Setup",
123
+ "text": "The model was trained and evaluated on the CIFAR-10 dataset with ResNet50, as the accuracy by this model is about 40%, which can have enough space to show whether there are some improvement or not (On the other hand, if we use MNIST dataset, its accuracy can achieve more than 99%, if there are some improvement, it still small). The dataset consists of 60,000 32x32 color images in 10 classes, with 50,000 training images and 10,000 test images. The images were normalized to a range of [0, 1] and the labels were converted to one-hot encoded vectors.\nIn this verification, ResNet and AdaResNet are compared. For ResNet, we use the Keras library of TensorFlow. AdaResNet are customed based on the Keras library of TensorFlow too.\nIt is a custom ResNet model modified to incorporate a trainable parameter that scales the input before adding it to an intermediate feature map. This modification is intended to examine the impact of dynamic feature scaling on the performance of the model when applied to the CIFAR-10 dataset. The model is constructed using the Keras framework, and the details of the implementation are outlined below.\nThe implementation includes creating a custom layer within the Keras framework that integrates the trainable parameter . The setup process is summarized in Algorithm 1. For more detailed information, please refer to the code available on GitHub222https://github.com/suguest/AdaResNet.\nThe experiments were performed on the CIFAR-10 dataset. Each method was trained for 10 epochs, and the performance metrics such as accuracy and loss were recorded for both training and validation datasets. Below are the verification results."
124
+ },
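For reference, a minimal sketch of the data preparation described above (CIFAR-10 loading, normalization to [0, 1], one-hot labels) using standard TensorFlow/Keras utilities; the preprocessing in the authors' repository may differ in details.

```python
import tensorflow as tf

# CIFAR-10: 50,000 training and 10,000 test images of shape 32x32x3 in 10 classes.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

# Normalize pixel values to the range [0, 1].
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# Convert integer class labels (0-9) into one-hot encoded vectors.
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)

print(x_train.shape, y_train.shape)  # (50000, 32, 32, 3) (50000, 10)
```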
125
+ {
126
+ "section_id": "3.1.2",
127
+ "parent_section_id": "3.1",
128
+ "section_name": "III-A2 Results",
129
+ "text": "The professional learning curves is shown in Figure 3 ###reference_### and Figure 4 ###reference_### that illustrate the training and validation accuracy for each method over the 10 epochs.\n###figure_3### ###figure_4### The comparison clearly demonstrates the differences among the three methods.\nThe two methods of AdaResNet show higher accuracy in both accuracies on the training data and test data.\nFor the training data, AdaResNet achieves the highest final test accuracy of 0.81 and 0.72 separately for AdaResNet with two weights and one unified weight, which has more than 0.26 and 0.18 increase in the accuracy than the traditional ResNet method with a accuracy of 0.46.\nFor the test data, the proposed method show an accuracy of 0.71 and 0.63 for two methods of AdaResNet, which has a more accuracy of than 0.25 and 0.18 than that of the traditional method (0.46). The AdaResNet with two separate weights has an increase of 54.35% increase of traditional ResNet.\nWhen comparison of two methods of AdaResNet, one with the unified weight and another with separate weights, the method with separate weights has more accuracy improvement. This indicates that there are different relationship among the input and intermediate process results between the identify block and conv block.\nFrom the above results, it indicates that the trainable weight effectively balances the influence of the raw input and the transformed data, leading to improved learning and generalization."
130
+ },
131
+ {
132
+ "section_id": "3.2",
133
+ "parent_section_id": "3",
134
+ "section_name": "III-B Weights Impact",
135
+ "text": "In this section, we aim to verify that is a dynamic parameter rather than a fixed value, capable of adapting to different training tasks and varying even within the same task across different training iterations.\nFor a better comparison, we output the after each training is done, i.e. to iterate to output the of each layer, as shown in 3 ###reference_###."
136
+ },
137
+ {
138
+ "section_id": "3.2.1",
139
+ "parent_section_id": "3.2",
140
+ "section_name": "III-B1 Whether is a single value or not",
141
+ "text": "In this subsection, we aim to determine whether the remains consistent across runs. We conducted three separate runs of AdaResNet, all starting with the same initial parameters and using the same training data (CIFAR-10), with each model trained for 10 epochs. The results are shown in Figure 5 ###reference_### and table I ###reference_###.\n###figure_5### From Figure 5 ###reference_###, we can see that the weights values are different in different layers. This indicates that it is not suitable to use a fixed value for the combination of input and the intermediately processed data. We also combined to use a fixed ratio among the input data the intermediately processed data of 2 as shown in Figure 6 ###reference_###, which also shows a higher accuracy than to use the dynamic .\nThe difference of weight in different test rounds can also be seen in Table I ###reference_###. The weight values across the three test rounds exhibit variations, indicating that the weights differ between layers. For instance, at layer 1, the weights are -0.2872, 0.2799, and 0.3222 for rounds one, two, and three, respectively, demonstrating a significant range of approximately 0.61 between the lowest and highest values. Similarly, at layer 5, the weights are -1.7673, -1.9361, and -1.9803, again showing variability with a difference of about 0.21. These differences underscore that the weights in each layer are not consistent across different rounds of testing, which can be attributed to factors such as random initialization and the stochastic nature of the training process.\n###figure_6###"
142
+ },
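The per-layer weight readout behind Figure 5 and Table I can be reproduced with a small helper like the one below, assuming the trainable scalars were created with a recognizable name (here ada_weight, as in the earlier layer sketch); this is an illustrative snippet, not the authors' exact logging code.

```python
def report_ada_weights(model):
    """Print the learned combination weight of every weighted-skip layer in `model`."""
    weights = [
        float(v.numpy())
        for v in model.trainable_variables
        if "ada_weight" in v.name   # name chosen in the custom-layer sketch
    ]
    for i, w in enumerate(weights, start=1):
        print(f"layer {i}: weight = {w:+.4f}")
    return weights
```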
143
+ {
144
+ "section_id": "3.2.2",
145
+ "parent_section_id": "3.2",
146
+ "section_name": "III-B2 Whether is different among different training task",
147
+ "text": "While for different classification tasks, such as for MNIST, the weights have big difference. For the weights of MNIST, we also carry out three verification, the results are shown in Figure 7 ###reference_### and table II ###reference_###.\n###figure_7### Thus, we analyze the difference between within group and between groups. The variance of two groups of weight data, representing the CIFAR-10 and MNIST datasets, was analyzed using absolute values. The CIFAR-10 and MNIST groups each comprised three sets of eight weights. The within-group variance was computed by averaging the variance across the corresponding columns within each group, while the between-group variance was calculated by assessing the variance between the mean values of the columns across the two groups.\nThe results revealed a within-group variance of 0.0113 for the CIFAR-10 group and 0.0074 for the MNIST group, indicating that the CIFAR-10 group exhibits slightly higher variability among its data points compared to the MNIST group. Furthermore, the between-group variance was calculated to be 0.1205, which is significantly higher than both within-group variances. This suggests that the differences between the mean values of the CIFAR-10 and MNIST groups are more pronounced than the variations observed within each group. Overall, the analysis highlights that the between-group differences are more substantial than the differences within the individual groups, with CIFAR-10 showing a marginally greater degree of internal variability than MNIST."
148
+ },
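One plausible reading of the variance computation described above is sketched below in NumPy: each group is a 3x8 array (three rounds, eight layer weights), within-group variance averages the per-column variances, and between-group variance compares the two groups' column means; the arrays here are placeholders, not the paper's recorded weights.

```python
import numpy as np

def group_variances(group_a, group_b):
    """group_a, group_b: arrays of shape (rounds, layers) of weight values."""
    a = np.abs(np.asarray(group_a, dtype=float))   # analysis uses absolute values
    b = np.abs(np.asarray(group_b, dtype=float))
    within_a = a.var(axis=0).mean()                # mean of per-layer variances
    within_b = b.var(axis=0).mean()
    column_means = np.stack([a.mean(axis=0), b.mean(axis=0)])
    between = column_means.var(axis=0).mean()      # variance between group means
    return within_a, within_b, between

# Placeholder data: 3 rounds x 8 layers per dataset.
cifar_weights = np.random.rand(3, 8)
mnist_weights = np.random.rand(3, 8)
print(group_variances(cifar_weights, mnist_weights))
```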
149
+ {
150
+ "section_id": "4",
151
+ "parent_section_id": null,
152
+ "section_name": "IV Related Work",
153
+ "text": "The development of deep neural networks has been one of the most significant advancements in artificial intelligence, with ResNet (Residual Network) standing out as a groundbreaking architecture. Since its introduction by He et al. in 2016 [4 ###reference_b4###], ResNet has become a cornerstone in the design of deep networks, particularly for tasks in computer vision such as image classification, object detection, and segmentation."
154
+ },
155
+ {
156
+ "section_id": "4.1",
157
+ "parent_section_id": "4",
158
+ "section_name": "IV-A Residual Networks and Skip Connections",
159
+ "text": "The concept of residual learning was introduced to address the degradation problem in deep neural networks, where adding more layers to a network does not necessarily lead to better performance and often results in higher training error. ResNet\u2019s innovative use of skip connections allows the network to learn residual mappings instead of directly learning unreferenced functions [5 ###reference_b5###]. This approach effectively mitigates the vanishing gradient problem, as gradients can propagate more easily through the network. The original ResNet paper demonstrated that networks with over 100 layers could be trained successfully [6 ###reference_b6###], a feat previously unattainable with traditional deep architectures.\nWhile ResNet has achieved remarkable success, several extensions and modifications have been proposed to further enhance its performance. For example, Wide ResNet [1 ###reference_b1###] [7 ###reference_b7###] explores the effect of increasing the width of the network (i.e., the number of channels) instead of just depth, leading to improved performance on various datasets. Another variation, ResNeXt [8 ###reference_b8###], introduces a cardinality dimension, allowing for a more flexible combination of feature maps, which has been shown to improve accuracy and efficiency."
160
+ },
161
+ {
162
+ "section_id": "4.2",
163
+ "parent_section_id": "4",
164
+ "section_name": "IV-B Adaptive Mechanisms in Neural Networks",
165
+ "text": "The idea of incorporating adaptive mechanisms into neural networks has gained traction as researchers seek to make models more flexible and responsive to varying data distributions. Squeeze-and-Excitation Networks (SENet) [9 ###reference_b9###], for instance, adaptively recalibrate channel-wise feature responses by explicitly modeling interdependencies between channels. This enables the network to focus on the most informative features, leading to significant performance gains in image recognition tasks.\nAnother line of research focuses on adaptive learning rates and weights within networks. For example, the use of adaptive learning rates in algorithms such as Adam [10 ###reference_b10###] and RMSprop [11 ###reference_b11###] has become standard practice in training deep networks, allowing for faster convergence and better generalization.\nHowever, adaptive mechanisms within the architecture itself, such as the one proposed in our AdaResNet, are less explored. Existing methods typically focus on global adjustments, such as learning rates, rather than on dynamically altering the flow of information within the network. The Dynamic Convolution [12 ###reference_b12###] approach is a notable exception, where convolutional kernels are dynamically adjusted based on input features. However, it does not address the specific challenges posed by skip connections in residual networks."
166
+ },
167
+ {
168
+ "section_id": "4.3",
169
+ "parent_section_id": "4",
170
+ "section_name": "IV-C Limitations of Traditional Residual Networks",
171
+ "text": "Despite the successes of ResNet and its variants, the uniform treatment of the input () and processed data () in skip connections remains a limitation. Traditional ResNet adds and without considering the varying importance of these components across different layers or training data conditions. This uniformity can lead to suboptimal performance, especially in cases where the relative importance of and differs significantly.\nTo address this issue, several approaches have been proposed to modify the skip connections in ResNet. For example, the Mixed-Scale Dense Network (MSDNet) [13 ###reference_b13###] adapts the receptive field sizes across the network but does not dynamically adjust the skip connections themselves. Similarly, Highway Networks [14 ###reference_b14###] introduce gates to control the flow of information through the network, but these gates are static once trained and do not adapt during training."
172
+ },
173
+ {
174
+ "section_id": "4.4",
175
+ "parent_section_id": "4",
176
+ "section_name": "IV-D Our Contribution",
177
+ "text": "Our proposed AdaResNet builds on this body of work by introducing an adaptive mechanism specifically for skip connections in residual networks. By allowing the ratio of to , represented by the learnable parameter , to be adjusted dynamically during training, AdaResNet provides a more flexible and data-responsive architecture. This approach not only addresses the limitations of traditional ResNet but also leverages the strengths of adaptive learning to enhance performance across a range of tasks and datasets.\nIn summary, while significant progress has been made in the design and optimization of deep neural networks, the uniform treatment of skip connections in residual networks presents a limitation that has yet to be fully addressed. AdaResNet represents a novel contribution in this area, introducing a dynamic and adaptive approach to residual learning that we believe will offer significant benefits in terms of both accuracy and generalization."
178
+ },
179
+ {
180
+ "section_id": "5",
181
+ "parent_section_id": null,
182
+ "section_name": "Conclusion",
183
+ "text": "In this paper, we introduced AdaResNet, a novel extension of the ResNet architecture that incorporates an adaptive mechanism for dynamically balancing the contributions of skipped input () and processed data (). Traditional ResNet models rely on a fixed 1:1 ratio for combining and , which can be suboptimal in various training scenarios. AdaResNet addresses this limitation by introducing a learnable parameter, , which is automatically optimized during training. This allows the network to adjust the ratio between and in response to the specific characteristics of the data, thereby enhancing the model\u2019s adaptability and overall performance.\nOur experimental results demonstrate that AdaResNet consistently outperforms the traditional ResNet architecture, particularly in tasks where the relative importance of and varies across different layers and datasets. We also highlighted the critical insight that the optimal weights for skip connections differ across layers and tasks, challenging the conventional approach of using a uniform weight ratio across the entire network.\nBy leveraging adaptive skip connections, AdaResNet not only improves accuracy and efficiency but also offers a more nuanced and flexible approach to deep network design. This work opens up new possibilities for further exploration of adaptive mechanisms in neural networks, with potential applications across various domains in deep learning.\nFuture work will focus on extending the AdaResNet framework to other network architectures and exploring the impact of adaptive mechanisms in different types of neural networks, such as those used in natural language processing and reinforcement learning. Additionally, we plan to investigate the theoretical underpinnings of adaptive skip connections to better understand their role in improving network generalization and robustness."
184
+ }
185
+ ],
186
+ "appendix": [],
187
+ "tables": {
188
+ "1": {
189
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Weights in Different Layers for Three Rounds of Testing (cifar10)</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.1.1\">Layer</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.2.1\">round_1</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.3.1\">round_2</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.4.1\">round_3</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.1.1\">1</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.1.2\">-0.28722298</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.1.3\">0.27989703</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.1.4\">0.32219923</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.2.1\">2</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.2.2\">-0.41371468</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.2.3\">-0.28776032</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.2.4\">-0.30848</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.3.1\">3</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.3.2\">-0.37947246</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.3.3\">-0.3051696</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.4.3.4\">-0.5491747</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.4.1\">4</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.4.2\">0.8734257</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.4.3\">1.1673123</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.4.4\">0.84171796</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.5.1\">5</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.5.2\">-1.7672663</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.5.3\">-1.9361044</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.6.5.4\">-1.9803141</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.7.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.7.6.1\">6</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.7.6.2\">1.7821076</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.7.6.3\">1.7983766</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.7.6.4\">2.0427594</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.8.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.8.7.1\">7</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.8.7.2\">-1.1800854</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.8.7.3\">1.2597568</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.8.7.4\">1.1798627</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.9.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.9.8.1\">8</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.9.8.2\">-0.82326496</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.9.8.3\">-0.8402289</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.1.9.8.4\">-0.8131428</td>\n</tr>\n</tbody>\n</table>\n</figure>",
190
+ "capture": "TABLE I: Weights in Different Layers for Three Rounds of Testing (cifar10)"
191
+ },
192
+ "2": {
193
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Weights in Different Layers for Three Rounds of Testing (MNIST)</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.1.1\">Layer</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.2.1\">round_1</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.3.1\">round_2</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.4.1\">round_3</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.2.1.1\">1</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.2.1.2\">0.44887054</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.2.1.3\">0.4484792</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.2.1.4\">-0.5003674</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.3.2.1\">2</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.3.2.2\">-0.34602356</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.3.2.3\">-0.35169616</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.3.2.4\">-0.31584582</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.4.3.1\">3</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.4.3.2\">-0.74334604</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.4.3.3\">-0.5807008</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.4.3.4\">0.8818225</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.5.4.1\">4</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.5.4.2\">0.5266892</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.5.4.3\">0.3835334</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.5.4.4\">0.43830293</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.6.5.1\">5</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.6.5.2\">-3.0067017</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.6.5.3\">-2.7609563</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_r ltx_border_t\" id=\"S3.T2.1.6.5.4\">-2.7376952</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.7.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.7.6.1\">6</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.7.6.2\">2.1653237</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.7.6.3\">2.065729</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.7.6.4\">2.4824123</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.8.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.8.7.1\">7</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.8.7.2\">-2.8167214</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.8.7.3\">-2.9216428</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.8.7.4\">2.5657778</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.9.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.9.8.1\">8</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T2.1.9.8.2\">-0.8365008</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T2.1.9.8.3\">-0.94025135</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T2.1.9.8.4\">-0.9289533</td>\n</tr>\n</tbody>\n</table>\n</figure>",
194
+ "capture": "TABLE II: Weights in Different Layers for Three Rounds of Testing (MNIST)"
195
+ }
196
+ },
197
+ "image_paths": {
198
+ "1": {
199
+ "figure_path": "2408.09958v1_figure_1.png",
200
+ "caption": "Figure 1: ResNet to add the input and intermediately processed directly to increase gradients for deep neural network",
201
+ "url": "http://arxiv.org/html/2408.09958v1/extracted/5800165/whyResNET.png"
202
+ },
203
+ "2": {
204
+ "figure_path": "2408.09958v1_figure_2.png",
205
+ "caption": "Figure 2: Incorporating weighting into residual learning and blocks",
206
+ "url": "http://arxiv.org/html/2408.09958v1/extracted/5800165/beta_diagram.png"
207
+ },
208
+ "3": {
209
+ "figure_path": "2408.09958v1_figure_3.png",
210
+ "caption": "Figure 3: Comparison of training accuracy",
211
+ "url": "http://arxiv.org/html/2408.09958v1/extracted/5800165/accuracy_comparison.png"
212
+ },
213
+ "4": {
214
+ "figure_path": "2408.09958v1_figure_4.png",
215
+ "caption": "Figure 4: Comparison of test accuracy",
216
+ "url": "http://arxiv.org/html/2408.09958v1/extracted/5800165/testAccuracyAndDiff.png"
217
+ },
218
+ "5": {
219
+ "figure_path": "2408.09958v1_figure_5.png",
220
+ "caption": "Figure 5: Weights of different layers",
221
+ "url": "http://arxiv.org/html/2408.09958v1/extracted/5800165/weightComparison.png"
222
+ },
223
+ "6": {
224
+ "figure_path": "2408.09958v1_figure_6.png",
225
+ "caption": "Figure 6: Accuracy comparison of fixed weight",
226
+ "url": "http://arxiv.org/html/2408.09958v1/extracted/5800165/comparisonOfFixedWeight.png"
227
+ },
228
+ "7": {
229
+ "figure_path": "2408.09958v1_figure_7.png",
230
+ "caption": "Figure 7: Weights of different layers for MNIST",
231
+ "url": "http://arxiv.org/html/2408.09958v1/extracted/5800165/weightComparisonMNIST.png"
232
+ }
233
+ },
234
+ "validation": true,
235
+ "references": [],
236
+ "url": "http://arxiv.org/html/2408.09958v1"
237
+ }
20240819/2408.10381v1.json ADDED
@@ -0,0 +1,464 @@
1
+ {
2
+ "title": "Efficient Reinforcement Learning in Probabilistic Reward Machines",
3
+ "abstract": "In this paper, we study reinforcement learning in Markov Decision Processes with Probabilistic Reward Machines (PRMs), a form of non-Markovian reward commonly found in robotics tasks. We design an algorithm for PRMs that achieves a regret bound of , where is the time horizon, is the number of observations, is the number of actions, and is the number of time-steps. This result improves over the best-known bound, of Bourel et al. (2023) for MDPs with Deterministic Reward Machines (DRMs), a special case of PRMs. When and , our regret bound leads to a regret of , which matches the established lower bound of for MDPs with DRMs up to a logarithmic factor. To the best of our knowledge, this is the first efficient algorithm for PRMs. Additionally, we present a new simulation lemma for non-Markovian rewards, which enables reward-free exploration for any non-Markovian reward given access to an approximate planner.\nComplementing our theoretical findings, we show through extensive experiment evaluations that our algorithm indeed outperforms prior methods in various PRM environments.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Reinforcement learning traditionally focuses on the setting where the reward function is Markovian, meaning that it depends solely on the current state and action, and independent of history. However, in many real-world scenarios, the reward is a function of the history of states and actions. For example, consider a robot tasked with patrolling various locations in an industrial park. The performance of robot is measured by how thorough it regularly covers different zones in the park, which cannot easily be represented as a function of its current state and action, but rather would depend on its whole trajectory during the patrol.\nOne emerging tool to model such performance metrics is called the Reward Machine (RM)(Icarte et al., 2018 ###reference_b16###, 2022 ###reference_b17###), which is a Deterministic Finite-State Automaton (DFA) that can compress the sequence of past events into one single state. Combined with the current observation, the state of RM can fully specify the reward function. Hence, for an MDP with RM, we can obtain an equivalent cross-product MDP by leveraging the information of RM(see Lemma 1 ###reference_ma1###) and applying off-the-shelf RL algorithms e.g., Q-learning of Sutton and Barto (2018 ###reference_b27###) to learn an optimal policy. However, as we shall see later, this naive approach will incur a large regret.\nOne limitation of the classic RM framework is that the transition between the state of RM is restricted to be deterministic, whereas stochastic transitions are much more common in practice, especially with uncertainty in the environment. For instance, suppose a robot working in a warehouse is tasked with managing a warehouse by performing simple tasks of fetching and delivering items (as shown in Figure 1 ###reference_###). The robot starts at a charging station, navigates to the item pickup location, collects the item, and then proceeds to the delivery location to deliver the item and receives a reward. Based on past experience and pre-collected data: there is a percent chance that the item at the pickup location is not ready, requiring the robot to wait until the item is ready, and a percent chance that the delivery location is occupied, causing the robot to wait before delivering the item. The robot is rewarded only when it successfully collects and delivers the item in sequence. The setting where the rewards can exhibit non-Markovian and stochastic dynamics can be represented as Probabilistic Reward Machine(PRM)(Dohmen et al., 2022 ###reference_b12###).\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### In this paper, we investigate RL in Markov decision processes with probabilistic reward machines. We formalize the regret minimization problem within the episodic MDP with PRM setting and introduce an algorithm, UCBVI-PRM, a UCB-style model-based RL algorithm with novel steps specifically tailored to PRMs. Our algorithm achieves a regret bound of that matches the established lower bound of for MDPs with PRMs up to a logarithmic factor. Additionally, we present a new simulation lemma that characterizes the difference in policy evaluations between two MDPs with generic non-Markovian rewards. Based on the lemma, we design a reward-free exploration algorithms that can collect data with sufficient coverage to learn a near-optimal policy under any non-Markovian reward in downstream tasks. Finally, we conduct experiments to showcase the efficiency of UCBVI-PRM."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Problem Formulation",
21
+ "text": "We start with a few definitions."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Learning Algorithms and Results",
27
+ "text": "Input: Bonus algorithm bonus\nInitialize:\nIn this section, we present our RL algorithm for PRMs, UCBVI-PRM. UCBVI-PRM follows the algorithmic skeleton of a classic model-based RL algorithm (Azar et al., 2017 ###reference_b2###), while incorporating designs that leverage the structure of PRMs. Our key contribution is a regret bound of when is large enough and . The regret bound matches the established lower bound up to a logarithmic factor for MDP with DRM, and is notably independent of the joint state space size.\nIntuitively, UCBVI-PRM (Algorithm 1 ###reference_###) proceeds in 3 stages:\n(i) From lines 1 to 7, the algorithm first constructs an empirical transition matrix based on the data collected thus far; (ii) Using this empirical transition matrix, the algorithm then performs value iteration from lines 8 to 23 to update the value function.\nNotably, between lines 8 and 19, the algorithm computes the new action-value function using the updated empirical transition matrix (line 12) and the exploration bonus (line 13); (iii) Finally, from lines 24 to 28, the agent selects actions based on the updated action-value function and collects new data, which is then incorporated into the dataset.\nThe main technical novelty lies in how we utilize the PRM structure. Denote a function that measures the expected return when being in state , executing action at time step and observing at time step . is defined as follows:\nis similar to value function in the sense that both are expected returns but condition on different random variables. Consequently, the estimation error can be characterized by instead of , and our bonus will be a function of instead of . More precisely, the estimation error can be translated to the estimation error in the observation space .\nWe utilize a Bernstein-type reward bonus to ensure that serves as an upper bound for with high probability, a common technique in the literature (Azar et al., 2017 ###reference_b2###; Zanette and Brunskill, 2019 ###reference_b32###; Zhang et al., 2021b ###reference_b35###). Unlike previous works that directly use , we leverage our knowledge of Probabilistic Reward Machines (PRMs) to construct our bonus using . This approach results in the regret associated with our bonus design growing only in the order of instead of .\n(Regret bound for UCBVI-PRM)\nConsider a parameter . Then the regret of UCBVI-PRM is bounded w.p. at least , by\nwhere .\nNotice that the leading term of the regret does not scale with . In contrast, if one were to apply an off-the-shelf RL algorithm to the cross-product MDP, it could achieve a regret bound no better than (Auer et al., 2008 ###reference_b1###).\nIn the work of Bourel et al. (2023 ###reference_b5###), their algorithms i.e., UCRL2-RM-L and UCRL2-RM-B achieve a regret bound of and in DRMs, respectively. Compared to their results, we improve the regret bound by a factor of and respectively, while generalizes to the PRM setting."
28
+ },
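To convey the shape of stages (i) and (ii) above, here is a deliberately simplified NumPy sketch of optimistic value iteration with an exploration bonus over a finite (cross-product) state space; it uses a generic Hoeffding-style bonus and assumes known rewards, whereas UCBVI-PRM uses a sharper Bernstein-type bonus built from the PRM structure, so this is a conceptual illustration rather than the paper's algorithm.

```python
import numpy as np

def optimistic_value_iteration(P_hat, R, counts, H, c=1.0):
    """P_hat: empirical transitions, shape (S, A, S); R: rewards, shape (S, A);
    counts: visit counts N(s, a), shape (S, A); H: horizon.
    Returns optimistic Q-values of shape (H, S, A)."""
    S, A, _ = P_hat.shape
    Q = np.zeros((H + 1, S, A))
    V = np.zeros((H + 1, S))          # V[H] = 0 at the end of the episode
    for h in range(H - 1, -1, -1):
        bonus = c * H / np.sqrt(np.maximum(counts, 1))       # simplified bonus
        Q[h] = np.minimum(R + P_hat @ V[h + 1] + bonus, H)   # clip at max return
        V[h] = Q[h].max(axis=1)       # greedy (optimistic) value
    return Q[:H]

# The agent would then act greedily with respect to Q within the episode and
# fold the newly observed transitions back into P_hat and counts (stage (iii)).
```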
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "RL with Non-Markoivian Rewards",
33
+ "text": "In this section, we introduce an explore-then-commit style algorithm for MDPs with generic non-Markovian rewards: in the exploration stage, the agent collects trajectories from the environment without the information of rewards; in the planning stage, the agent computes a near-optimal policy given the data gathered in the exploration stage, assuming access to an approximate planner. We give an efficient algorithm that conducts episodes of exploration and returns an -optimal policy for any general non-Markovian rewards, given an -approximate planner, formally stated below.\nA planner is -approximate if given any NMRDP , the planner returns a policy that satisfies\nwhere is the expected return of executing policy in and is the optimal policy in .\nThe exists an absolute constant , such that, for any , with probability at least , given the information collected by algorithm 3 ###reference_###, algorithm 4 ###reference_### can output -optimal policies for any non-Markovian rewards assuming access to -approximate planner. The number of episodes in algorithm 3 ###reference_### is bounded by\nwhere .\nThis result is made possible by a new simulation lemma that can be applied to generic non-Markovian rewards and non-Markovian policies.\nFor any two NMRDPs and , for any policy\nwhere\nThe proof can be found in the Appendix D.1 ###reference_###. This lemma characterizes the performance difference of the same policy applied to two Non-Markovian Reward Decision Processes (NMRDPs) that differ in their transition kernels. The performance gap is determined by the divergence in the transition kernel , the occupancy measure induced by the policy in , and the return upperbound .\nThis lemma is key to establish our result, because it can be applied to any non-Markovian reward and non-Markovian policy, including Markovian rewards and PRMs as special cases.\nIntuitively, this lemma provides a concrete goal for the exploration stage, i.e. the gap must be small for any pair that is visited significantly often under . In the following, we show how to achieve this goal."
34
+ },
35
+ {
36
+ "section_id": "5.1",
37
+ "parent_section_id": "5",
38
+ "section_name": "Exploration stage",
39
+ "text": "It turns out that a procedure (Algorithm 3 ###reference_###) similar to the Markovian reward case suffices for our purpose (Jin et al., 2020 ###reference_b18###). Intuitively, algorithm 3 ###reference_### perform two steps. In the first step, from lines 2 to 7, the algorithm constructs a set of exploratory policies each designed to visit an observation state as often as possible. To accomplish this, for each observation , the algorithm creates a reward function that is 0 everywhere except at observation , where it is set to 1 (line 3). The algorithm then employs a no-regret RL algorithm (e.g. EULER of Zanette and Brunskill (2019 ###reference_b32###)) to find a policy that maximizes this reward, which is equivalent to maximizing the probability of visiting . In the second stage, from lines 8 to 12, the algorithm collects new data by sampling and executing policies from this exploratory policy set. We prove that, with this framework, the collected data can be used to learn a transition kernel that is sufficiently close to the true transition characterized by the divergence in Lemma 2. Towards this, we introduce the notion of significant observation:\nA observation is -significant if there exists a policy , so that the probability to reach following policy is greater than . In symbol:\nwhere .\nIntuitively, since insignificant observations are rarely reachable by any policy, their contributions to the divergence in Lemma 2 will be limited. Thus, it suffices to only visit significant observations. Algorithm 3 ###reference_### is designed specifically for this purpose, and achieves the following guarantee.\n(Theorem 3 of (Jin et al., 2020 ###reference_b18###))\nThere exists absolute constant such that for any and , if we set where , then with probability at least , that Algorithm 3 ###reference_### will return a dataset consisting of trajectories , which are i.i.d sampled from a distribution satisfying:\nAs we can see from theorem 3 ###reference_orem3###, all significant observations can be visited by distribution with reasonable probability. Hence, with algorithm 3 ###reference_###, we can learn our model by visiting significant observations without the guidance of any rewards."
40
+ },
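A schematic Python sketch of the two steps just described follows; solve_mdp (standing in for the no-regret learner, e.g. EULER) and rollout are placeholder callables assumed to be provided, so this conveys the control flow rather than the exact specification of Algorithm 3.

```python
import random

def reward_free_exploration(observations, solve_mdp, rollout, n_episodes):
    """Step 1: one exploratory policy per observation, trained on an indicator
    reward that is 1 at that observation and 0 elsewhere.
    Step 2: collect a reward-free dataset by sampling those policies uniformly."""
    exploratory_policies = []
    for target in observations:
        indicator_reward = lambda obs, act, target=target: float(obs == target)
        exploratory_policies.append(solve_mdp(indicator_reward))

    dataset = []
    for _ in range(n_episodes):
        policy = random.choice(exploratory_policies)
        dataset.append(rollout(policy))   # trajectory of (o, a, o') triples
    return dataset
```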
41
+ {
42
+ "section_id": "5.2",
43
+ "parent_section_id": "5",
44
+ "section_name": "Planning stage",
45
+ "text": "After collecting enough data in the exploration stage, algorithm 4 ###reference_### use the data to compute an empirical transition matrix , on which the approximate planner is employed. We prove that (see Appendix D.2 ###reference_###), any policy will have small value gap in the learned transition under vs. the ground truth transition .\nThere exists an absolute constant , for any , , assume the dataset has i.i.d. samples from distribution which satisfies equation 1 ###reference_### with , and , where , then w.p. at least :\nThe reason for the increased sample complexity compared to the original analysis by Jin et al. (2020 ###reference_b18###) lies in the fact that more samples are required to reduce the model error associated with significant observations than in the Markovian setting. Specifically, in our analysis, it is necessary to account for model errors across every triplet. In contrast, in the standard Markovian setting, the modeling error can be further decomposed into the error of the empirical next-state value function (see the proof of Lemma 3.6 in Jin et al. (2020 ###reference_b18###)), which allows for tighter bounds. After we obtain our empirical transition matrix , given any non-Markovian rewards , we can find a near optimal policy by running -approximate planner, as a result of our simulation Lemma.\nUnder the preconditions of lemma 3 ###reference_ma3###, with probability of for any rewards , the output policy of algorithm 4 ###reference_### is -optimal, that is\nwhere is the optimal policy.\nNote that, for general non-Markovian rewards, the optimization error won\u2019t be reduced to , but for any PRMs, the optimization can be reduced to , since we can run value iteration given the cross-product MDP and solve it optimally. In addition, there are some cases where the rewards possess special structural properties, for which performance with guarantees can be achieved(Prajapat et al., 2023 ###reference_b24###; De Santi et al., 2024 ###reference_b10###)."
46
+ },
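The empirical transition matrix used by the planning stage can be assembled from the collected triples roughly as follows; the uniform fallback for unvisited observation-action pairs is an implementation choice for this sketch, not something specified by the paper.

```python
import numpy as np

def empirical_transitions(dataset, n_obs, n_actions):
    """Estimate P_hat(o' | o, a) from reward-free trajectories, where each
    trajectory is a list of (o, a, o_next) triples with integer indices."""
    counts = np.zeros((n_obs, n_actions, n_obs))
    for trajectory in dataset:
        for o, a, o_next in trajectory:
            counts[o, a, o_next] += 1
    totals = counts.sum(axis=-1, keepdims=True)
    return np.where(totals > 0, counts / np.maximum(totals, 1), 1.0 / n_obs)
```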
47
+ {
48
+ "section_id": "6",
49
+ "parent_section_id": null,
50
+ "section_name": "Experiments",
51
+ "text": "###figure_7### ###figure_8### In this section, we present a series of experiments comparing the empirical performance of our algorithm, UCBVI-PRM, with existing baselines. We evaluate our algorithm in MDPs with both DRM and PRM against different baselines. For DRMs, we compare with UCRL-RM-L and UCRL-RM-B of Bourel et al. (2023 ###reference_b5###). For PRM, since there is no existing algorithm, we compare with the naive approach of directly applying UCBVI(Azar et al., 2017 ###reference_b2###) onto the cross-product MDP.\nIn our experiment, we tune the exploration coefficient for all algorithms by selecting from a equally large set of options (see Appendix E.2 ###reference_###). This is to make sure that an algorithm with a larger hyper-parameter set does not get an unfair advantage. In addition, we apply the doubling trick (detailed in Appendix E.1 ###reference_###) to speed up UCBVI-PRM, which is a common technique in the literature(Auer et al., 2008 ###reference_b1###; Dann and Brunskill, 2015 ###reference_b9###) and won\u2019t affect the regret."
52
+ },
53
+ {
54
+ "section_id": "6.1",
55
+ "parent_section_id": "6",
56
+ "section_name": "DRM Experiments",
57
+ "text": "In the RiverSwim environment, shown in Figure 2 ###reference_###, the agent has two actions corresponding to swimming left or right. Going right results in stochastic transitions, as shown by the solid lines in Figure 2 ###reference_###(a). Going left results in deterministic transitions as shown by the dashed lines in Figure 2 ###reference_###(a). The agent receives reward when visit two extreme locations in RiverSwim(i.e., and ) in sequence.\nFigure 3 ###reference_###(a), 3 ###reference_###(b), and 3 ###reference_###(c) show the regret over time in the RiverSwim domain, with the results averaged over 16 runs. The shaded area shows the standard variance of the corresponding quantity. Specifically, Figures 3 ###reference_###(a), 3 ###reference_###(b), and 3 ###reference_###(c) present the regrets of the agent running in a RiverSwim MDP with 5 observations and a horizon length of 10, a RiverSwim MDP with 10 observations and a horizon length of 20, and a RiverSwim MDP with 15 observations and a horizon length of 30, respectively. As we can see from the figures, in simpler environments (fewer observations and shorter horizons), the advantage of UCBVI-PRM is not obvious (see Figure 3 ###reference_###(a)). However, with longer horizons and more observations, the gap between UCBVI-PRM and the baselines of Bourel et al. (2023 ###reference_b5###) becomes larger. These results align with our regret analysis, where the regret of UCBVI-PRM grows slower than UCRL-RM-L in the order of and slower than UCRL-RM-B in the order of and .\n###figure_9### ###figure_10### ###figure_11###"
58
+ },
59
+ {
60
+ "section_id": "6.2",
61
+ "parent_section_id": "6",
62
+ "section_name": "PRM Experiments",
63
+ "text": "In the warehouse environment(see Figure 1 ###reference_###), the robot has five actions corresponding to moving up, right, down, left, and stay. Moving up, right, down, or left leads to moving in the intended direction with probability 0.7, in each perpendicular direction with probability 0.1, or staying in the same place with probability 0.1. The stay action will result in the robot staying in the same place deterministically. The robot receives reward when successfully picks up an item and delivers it to the delivery location in sequence.\n###figure_12### ###figure_13### ###figure_14### Figures 4 ###reference_### show the regret over time, with the results averaged over 16 runs. Specifically, Figures 4 ###reference_###(a), 4 ###reference_###(b), and 4 ###reference_###(c) present the results of the agent running in a warehouse with a horizon length of 9, a warehouse with a horizon length of 12, and a warehouse with a horizon length of 15, respectively. In all experiments, UCBVI-PRM outperforms the baseline. In addition, as the horizon becomes longer and with larger warehouse, UCBVI-PRM beats the baseline with a larger margin."
64
+ },
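For concreteness, a minimal sketch of the slip dynamics described above; the grid size and the clipping at walls are assumptions made for illustration, since the text specifies only the action set and the slip probabilities.

```python
import numpy as np

# Action indices: 0=up, 1=right, 2=down, 3=left, 4=stay.
MOVES = {0: (-1, 0), 1: (0, 1), 2: (1, 0), 3: (0, -1), 4: (0, 0)}
PERPENDICULAR = {0: (1, 3), 1: (0, 2), 2: (1, 3), 3: (0, 2)}

def warehouse_step(pos, action, grid_shape, rng=np.random):
    """Sample the next cell: intended direction w.p. 0.7, each perpendicular
    direction w.p. 0.1, stay w.p. 0.1; the 'stay' action is deterministic."""
    if action == 4:
        return pos
    side_a, side_b = PERPENDICULAR[action]
    outcome = int(rng.choice([action, side_a, side_b, 4], p=[0.7, 0.1, 0.1, 0.1]))
    dr, dc = MOVES[outcome]
    r = min(max(pos[0] + dr, 0), grid_shape[0] - 1)  # stay inside the grid
    c = min(max(pos[1] + dc, 0), grid_shape[1] - 1)
    return (r, c)
```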
65
+ {
66
+ "section_id": "7",
67
+ "parent_section_id": null,
68
+ "section_name": "Conclusion",
69
+ "text": "We study sample-efficient reinforcement learning in episodic Markov decision processes with probabilistic reward machines. We introduce an algorithm tailored for PRMs that matches the established lower bound when and . We also present a lemma that characterizes the difference in policy evaluations between two MDPs with non-Markovian rewards. Building upon the new lemma, we establish the reward-free learning result for non-Markovian reward. Finally, we validate our algorithms through a series of experiments. Interesting future direction includes designing effective and efficient algorithms for the multi-agent setting, and exploring connections with reward structures such as submodular rewards."
70
+ }
71
+ ],
72
+ "appendix": [
73
+ {
74
+ "section_id": "Appendix 1",
75
+ "parent_section_id": null,
76
+ "section_name": "Appendix A Table of notation",
77
+ "text": ""
78
+ },
79
+ {
80
+ "section_id": "Appendix 2",
81
+ "parent_section_id": null,
82
+ "section_name": "Appendix B Notation",
83
+ "text": "We follow the notations of Azar et al. [2017 ###reference_b2###].\nWe use the lower case to denote the functions evaluated at the current state-action pair. e.g., let . We also define . We also denote and .\nWe split the episodes into sets: the set of \"typical\" episodes in which the number of visits to the encountered state-actions are large and the rest of the episodes. Further, we define\nWe define typical episodes and the set of typical state-dependent episodes as follows\nDefine , , , and for every , and . We denote Then the upper-bound regret is defined as follows\nWe also define regret of every state and its corresponding upper bounds as\nWe define the following martingale operator for every , and , let denote the time stamp at step of episode then\nLet be the history of all random events up to (and including) step of episode then we have that . Hence is a martingale difference w.r.t. .\nDenote . We define the high probability events and . We define four confidence levels as follows.\nLet be the set of all probability distributions on . Define the following confidence set for every and .\nWe now define the random event as follows\nLet be a positive integer, and let be a set of real-valued functions defined on for some integer . We define the following random events for given parameters , , and :\nWe use the short-hand notation and for and . We define a set of random variables for every and\nWe now define the event as follows\nwhere is number of visits to observation at step up to episode , correspondingly, we have as number of visits to observation at step up to episode .\nWe denote the total count of steps up to episode by . We define for every and , as follows\nFor every , and , we introduce the following notation.\nWe also define the upper bound for every , and as follows"
84
+ },
85
+ {
86
+ "section_id": "Appendix 3",
87
+ "parent_section_id": null,
88
+ "section_name": "Appendix C Proof of the Regret Bounds",
89
+ "text": "Fix . Let the event hold. For all , , , , , , we have that for any :\nwhere we denote ,\nThe first inequality holds under the event . The pigeonhole principle is applied to the second inequality, the Cauchy-Schwarz inequality to the third, and the fact that to the fourth. The final inequality uses the condition .\n\u220e\nLet and . Let the events and hold. Then the following inequalities hold for and :\nThis analysis builds on the regret analysis of Azar et al. [2017 ###reference_b2###], but introduces novel design for the property of PRMs.\nDefine , . We drop the dependencies of notation on episode for easier presentation, e.g., we write and as and .\nwhere for , , , is the estimation error of the optimal value function at the next state. is the estimation error of the reward function. \n, we further have\nBased on lemma 7 ###reference_ma7###,\nwhere . Hence\nwhere .\nUnrolling this recursion, we can have\nThe last inequality is based on the fact that .\n\u220e\nLet and . Let the events and hold. Then the following hold for every\nAlso the following bounds hold for every and\nThe fact that the event holds implies that the event , , and hold. Combined with the fact that , the first argument is proved.\nFor the second argument, with the event , we can have\nIt is easy to verify that , . Combined with this inequality, we complete the proof of the second argument.\n\u220e\nLet and . Let the events and hold. Then the following hold for every\nBy definition of , the results of lemma 8 ###reference_ma8### and lemma 9 ###reference_ma9###, we can get\nConsequently, we can have\nProof is completed.\n\u220e\nLet and . Then under the events and the following hold\nUnder the event , and hold, hence we have\nThe law of total variance leads to\nCombining with the fact that and lemma 5 ###reference_ma5###, we complete our proof.\n\u220e\nLet and . Then under the events and the following hold\nBy definition, we have\nFor a better presentation, we make some short-hand notation: we denote , , and . Further\nThe first inequality is derived from the definition of variance and the conditions , which implies .\nUsing the same argument, we can also have the following:\nUnder the event , the event holds. Plus, under event , we have . These combined can be used to prove that:\nUnder the event , the event holds, and under the event , we can bound with:\nThe last inequality is based on the fact that . We complete our proof by multiplying and by .\n\u220e\nLet and . Then under the events and the following hold\nWe use the same notation as lemma 12 ###reference_ma12###. We denote , , and . We only need to prove the first argument, since the second inequality can be proved in a similar manner. The only difference is that and are replaced by and , respectively. We start by proving the first inequality, and the following inequalities hold:\nThe first inequality holds because under , and consequently, . The second inequality holds by enlarging confidence interval under event .\nThe first inequality holds for the event . The second inequality holds because of the pigeon-hole argument(see e.g., Appendix C.3 of Auer et al. [2008 ###reference_b1###]).\nThe first inequality hold by using the same argument of lemma 12 ###reference_ma12###.\nBy applying another pigeon-hole principle.\n\u220e\nLet and . Then under the events and the following hold\nWe only need to prove the first argument, since the second inequality can be proved in a similar manner. The only difference is that and are replaced by and , respectively. 
Hence, we start by proving the first argument:\nand can be bounded under the events and using the results of lemma 11 ###reference_ma11### and lemma 12 ###reference_ma12###. Hence,\nThe second inequality is based on the fact that .\n can be bounded by using pigeon-hole principle\nCombining all these and the condition that , we complete the proof.\n\u220e\nLet and . Then under the events and the following hold\nWe only need to prove the first argument, since the second inequality can be proved in a similar manner. The only difference is that and are replaced by and , respectively. Hence, we start by proving the first argument:\nWe can use similar technique to lemma 14 ###reference_ma14### to bound .\nand can be bounded under and using lemma 11 ###reference_ma11### and lemma 13 ###reference_ma13###. Hence\nThe last inequality is based on the fact that . Applying pigeon-hole principle to , is bounded by\ncan be bounded by by the pigeon-hole principle and using the concentration inequality under . can be bounded by since it is sum of the martingale difference sequence. can be bounded by by using the pigeon-hole principle. Hence\nThe last inequality is based on the fact that . Combining all these, we complete our proof.\n\u220e\nUnder the events and , the following hold\nUnder the event , can be bounded by using the pigeon-hole principle. Similarly, can be bounded by using pigeon-hole principle. Then, we sum up the regret due to , and from lemma 15 ###reference_ma15###, lemma 14 ###reference_ma14### and lemma 9 ###reference_ma9### to bound . The following holds:\nBy solving the bound in terms of , we complete our proof.\n\u220e\nLet and . Then under the events and the following hold\nThe proof is similar to the proof of lemma 16 ###reference_ma16###, we start by getting a equation in terms of by summing up regret due to , , and . Then we solve the bound in term of to get our result.\n\u220e\nLet and . Then under the events and the following hold for every\nThe first inequality is based on the fact that . The second inequality holds per lemma 17 ###reference_ma17###. Since by definition is monotonically non-increasing in , is monotonically non-increasing in as well. Then we have\nWe complete our proof by dividing on both sides of the above inequality.\n\u220e\nUnder the event , the set of events hold.\nWe prove this by induction. First, . And . \nTo prove this result, we need to show if holds then also holds. If holds, we have following hold for every per lemma 18 ###reference_ma18###.\nPer the algorithm, we have\nIf , holds trivially by invoking induction. Also when , holds trivially. Thus, we only need to consider when . In this case,\nThe inequality holds based on the fact that is the optimal policy w.r.t. . From the induction assumption, holds, consequently, . Then we have\nFurther, for we have\nBy leveraging lemma 25 ###reference_ma25###, we have\nFor , we have\nThis inequality is based on lemma 18 ###reference_ma18###. Hence can be lower bounded by\nCombining all these proves . Thus, the event holds. The proof is completed by invoking induction from to .\n\u220e\nCombining lemma 6 ###reference_ma6###, lemma 19 ###reference_ma19### and lemma 16 ###reference_ma16###, we can complete the proof of theorem 1 ###reference_orem1###."
90
+ },
91
+ {
92
+ "section_id": "Appendix 4",
93
+ "parent_section_id": null,
94
+ "section_name": "Appendix D Exploration with Non-Markovian Rewards",
95
+ "text": "Suppose is the emprical transition matrix formed by sampling according to distribtution for samples, and when , then w.p. at least ,\nwhere .\nBy Azuma-Hoeffding\u2019s inequality, w.p. at least we can have:\nis the number of visits to state action pair under the distribution .\nWith Hoeffding\u2019s inequality, w.p. at least , we can have:\nwhich gives, w.p. at least :\nHence, we have:\nBy definition, ,\nwhich leads to,\nThe inequality is based on the fact that there is only state action pairs in total. When , we can have:\nCombining all these, we prove our result.\n\u220e\nWe define as the expected reward collected by trajectory , then for every ,\nwhere . Note that, for PRMs, represents the expectation of the reward of trajectory . However, for DRMs, is a deterministic quantity. Further, we have\nThe inequality is based on Holder\u2019s inequality, and the definition of leads to\nwhere can be further rewritten as\nHence,\nwhere . Consequently,\nHence, by replacing with , we complete our proof.\n\u220e\nWith lemma 2 ###reference_ma2###, we have\nWe let , we have,\nBy definition of insignificant state, we have:\nThe first inequality is based on the fact that for a fixed pair, . On the other hand, by Cauchy-Shwarts inequality, we have:\nBy preconditions, for any we always have\nwhich leads to:\nBy lemma 20 ###reference_ma20###, when , then w.p. at least ,\nCombining all equations above, we have\nChoose and , the proof is completed. \u220e\nWe denote the optimal policy for NMRDP and empirical NMRDP as and respectively, then the following holds\nwhere evaluation errors are bounded by by lemma 3 ###reference_ma3###, and optimization error is achieved by -approximate planner.\n\u220e\nBased on lemma 3 ###reference_ma3###, we need , consequently, . Since we need episodes for each observation, the total number of episodes of finding is , which gives second term of Theorem 2 ###reference_orem2###. We combine this with the result of lemma 3 ###reference_ma3###, which leads to the first term of Theorem 2 ###reference_orem2###. We complete our proof by considering the optimization error and policy evaluation error per lemma 4 ###reference_ma4###.\n\u220e"
96
+ },
97
+ {
98
+ "section_id": "Appendix 5",
99
+ "parent_section_id": null,
100
+ "section_name": "Appendix E Experiment",
101
+ "text": "In this section, we provide the implementation details of our experiments.\nTo speed up the computation, we apply the doubling trick originated from Auer et al. [2008 ###reference_b1###]. Instead of updating empirical transition matrix every episode, we update it after certain amount of observations. Specifically, whenever there is a pair whose visitation counts reach the power of , we start a new epoch, in which we recompute our empirical transition matrix (line 7 in algorithm 1 ###reference_###), empirical cross-product transition matrix , rewards (line 12 in algorithm 1 ###reference_###) and bonus function(line 13 in algorithm 1 ###reference_###). Then we calculate new function (line 14 in algorithm 1 ###reference_###). Finally we execute policy according to new Q function until there is there is a pair whose visitation counts reach the power of to start a new epoch. This approach greatly reduce the computation and won\u2019t affect the statistical efficiency.\nUCBVI-PRM, UCBVI, UCRL2-RM-L, and UCRL2-RM-B all apply the principle of optimism in the face of uncertainty, where the algorithms either adjust the reward functions (UCBVI-PRM and UCBVI) or modify their models (UCRL2-RM-L and UCRL2-RM-B) to balance exploration and exploitation. Specifically, UCBVI-PRM and UCBVI carefully design exploration bonuses to ensure that . In contrast, UCRL2-RM-L and UCRL2-RM-B construct a set of MDPs that likely contains the true MDP according to different concentration inequalities, then alter the model to be the best possible MDP within that set. However, due to the theoretical pessimism, these approaches often lead to over-exploration in practice, resulting in higher regret. To mitigate this, we tune the exploration coefficient of each algorithm to better balance exploration and exploitation in each environment, improving performance. For fairness, we select the optimal exploration coefficient for each algorithm from an equally large set of candidates.\nSpecifically, the exploration coefficient of UCBVI-PRM is defined as the empirical bonus used in the experiments divided by the theoretical bonus function calculated using Algorithm 2 ###reference_###. This modifies line 13 of Algorithm 1 ###reference_### to be: . UCBVI applies the same rule as UCBVI-PRM. For UCRL2-RM-L, the algorithm designs confidence sets for the transition function for every pair, such that the true dynamics lie within a confidence interval centered on the empirical mean . Formally, ,, where is the original parameter. The exploration coefficient for UCRL2-RM-B follows the same principle, with the only distinction being the confidence interval design. For more detailed implementation, please refer to our codes. For each algorithm, we choose parameters from an equally large set for each environment. Following is the table of candidates of exploration coefficient for every algorithm,\nAccording our results, all original algorithms over explore ( leads to better performance) in all of our experiments. Surprisingly, we find a fixed set of parameters perform the best out of other opponents across all experiment settings. Specifically, we end up choosing , , and for UCBVI-PRM, UCBVI, UCRL2-RM-L and UCRL2-RM-B, respectively. Note that, with smaller , UCRL2-RM-L and UCRL2-RM-B will fail in converging in Extended Value Iteration (EVI) [Auer et al., 2008 ###reference_b1###] in all of our experiments."
102
+ },
103
+ {
104
+ "section_id": "Appendix 6",
105
+ "parent_section_id": null,
106
+ "section_name": "Appendix F Technical Lemmas",
107
+ "text": "[Maurer and Pontil, 2009 ###reference_b21###]\nLet be i.i.d. random variables with values in and let . Define and . Then we have\n[Cesa-Bianchi and Lugosi, 2006 ###reference_b8###]\nLet be i.i.d. random variables with values in and let . Define and . Then we have\n[Cesa-Bianchi and Lugosi, 2006 ###reference_b8###]\nLet be a martingale difference sequence w.r.t. some filtration , , then for any , , we have\n[Freedman, 1975 ###reference_b13###]\nLet be a martingale difference sequence w.r.t. some filtration , . if the sum of the variances then for any , , and we have\n[Azar et al., 2017 ###reference_b2###]\nLet and be two random variables. Then the following bound holds for their variances"
108
+ }
109
+ ],
110
+ "tables": {
111
+ "1": {
112
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.101\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.101.101\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.101.101.102.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"A1.101.101.102.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.101.101.102.1.1.1\">Symbol</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"A1.101.101.102.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.101.101.102.1.2.1\">Explanation</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"A1.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.1.1.1.2\">The observation space</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.2.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.2.2.2.2\">The state space of PRM</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.4.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.3.3.3.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.4.4.4.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.5.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.5.5.5.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.5.5.5.2\">The action space</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.6.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.6.6.6.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.6.6.6.2\">The policy at episode k</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.8.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.7.7.7.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.8.8.8.2\">The transition function on \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.9.9.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.9.9.9.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.9.9.9.2\">The reward function</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.11.11.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.10.10.10.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.11.11.11.2\">The transition function on \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.12.12.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.12.12.12.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.12.12.12.2\">The transition function of PRM</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.13.13.13\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.13.13.13.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.13.13.13.2\">The reward function of PRM</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.14.14.14\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.14.14.14.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.14.14.14.2\">Labeling function</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.15.15.15\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.15.15.15.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.15.15.15.2\">Size of observation space</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.16.16.16\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.16.16.16.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.16.16.16.2\">Size of action space</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.17.17.17\">\n<th class=\"ltx_td ltx_align_center ltx_th 
ltx_th_row\" id=\"A1.17.17.17.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.17.17.17.2\">The horizon length</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.18.18.18\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.18.18.18.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.18.18.18.2\">Size of state space of PRM</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.21.21.21\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.20.20.20.2\">\n and \n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.21.21.21.3\">The total number of steps and number of steps up to episode \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.22.22.22\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.22.22.22.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.22.22.22.2\">The total number of episodes</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.25.25.25\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.23.23.23.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.25.25.25.3\">Number of visits to observation-action pair up to episode \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.27.27.27\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.26.26.26.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.27.27.27.2\">Optimal value function \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.30.30.30\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.28.28.28.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.30.30.30.3\">The estimate of value function at step of episode \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.32.32.32\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.31.31.31.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.32.32.32.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.34.34.34\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.33.33.33.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.34.34.34.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.37.37.37\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.35.35.35.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.37.37.37.3\">The estimate of action function at step of episode \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.38.38.38\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.38.38.38.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.38.38.38.2\">The exploration bonus</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.40.40.40\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.39.39.39.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.40.40.40.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.45.45.45\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.41.41.41.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.45.45.45.5\">Number of transitions from to after taking action up to episode \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.51.51.51\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.46.46.46.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.51.51.51.6\">Number of transitions from to after taking action at step up to episode \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.55.55.55\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.52.52.52.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.55.55.55.4\">Number of visits to observation at step up to episode \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.60.60.60\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.56.56.56.1\"></th>\n<td 
class=\"ltx_td ltx_align_left\" id=\"A1.60.60.60.5\">The empirical transition distribution from to upon taking action of episode \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.63.63.63\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.61.61.61.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.63.63.63.3\">The empirical next-state variance of for every \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.66.66.66\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.64.64.64.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.66.66.66.3\">The next-state variance of for every state-action pair \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.70.70.70\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.67.67.67.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.70.70.70.4\">The empirical next-state variance of for every at episode \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.73.73.73\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.71.71.71.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.73.73.73.3\">The next-state variance of for every state-action pair \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.76.76.76\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.74.74.74.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.76.76.76.3\">The empirical next-observation variance of for every triplet \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.79.79.79\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.77.77.77.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.79.79.79.3\">The next-observation variance of for every triplet \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.83.83.83\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.80.80.80.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.83.83.83.4\">The empirical next-observation variance of for every triplet at episode \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.86.86.86\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.84.84.84.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.86.86.86.3\">The next-observation variance of for every triplet \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.88.88.88\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.87.87.87.1\">\n<span class=\"ltx_text\" id=\"A1.87.87.87.1.1\">Regret</span>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.88.88.88.2\">The regret after episodes</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.90.90.90\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.89.89.89.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.90.90.90.2\">The upper-bound regret after episodes</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.93.93.93\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.91.91.91.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.93.93.93.3\">One step regret at step of episode \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.96.96.96\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.94.94.94.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.96.96.96.3\">One step upper-bound regret at step of episode \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.97.97.97\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"A1.97.97.97.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.97.97.97.2\">The high probability event under which the concentration inequalities hold</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.99.99.99\">\n<th class=\"ltx_td ltx_align_center ltx_th 
ltx_th_row\" id=\"A1.98.98.98.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"A1.99.99.99.2\">The high probability event under which the are the upper bounds of optimal value function</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.101.101.101\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb\" id=\"A1.100.100.100.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A1.101.101.101.2\">The history of all random events up to time step \n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
113
+ "capture": "Table 1: Exploration Coefficient Candidates for Three Algorithms"
114
+ },
115
+ "2": {
116
+ "table_html": "<figure class=\"ltx_table\" id=\"A5.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A5.T1.25\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A5.T1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"A5.T1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A5.T1.1.1.2.1\">Algorithm</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A5.T1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A5.T1.1.1.1.1\">Candidates of </span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A5.T1.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A5.T1.7.7.7\"><span class=\"ltx_text ltx_font_typewriter\" id=\"A5.T1.7.7.7.1\">UCBVI-PRM</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A5.T1.7.7.6\">\n, , , , , \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T1.13.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A5.T1.13.13.7\"><span class=\"ltx_text ltx_font_typewriter\" id=\"A5.T1.13.13.7.1\">UCBVI</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T1.13.13.6\">\n, , , , , \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T1.19.19\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A5.T1.19.19.7\"><span class=\"ltx_text ltx_font_typewriter\" id=\"A5.T1.19.19.7.1\">UCRL2-RM-L</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A5.T1.19.19.6\">\n, ,, , , \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A5.T1.25.25\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A5.T1.25.25.7\"><span class=\"ltx_text ltx_font_typewriter\" id=\"A5.T1.25.25.7.1\">UCRL2-RM-B</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A5.T1.25.25.6\">\n, , , , , \n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"A5.T1.27.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"A5.T1.28.2\" style=\"font-size:90%;\">Exploration Coefficient Candidates for Three Algorithms</span></figcaption>\n</figure>",
117
+ "capture": "Table 1: Exploration Coefficient Candidates for Three Algorithms"
118
+ }
119
+ },
120
+ "image_paths": {
121
+ "1(a)": {
122
+ "figure_path": "2408.10381v1_figure_1(a).png",
123
+ "caption": "(a) Warehouse environment\nFigure 1: The Warehouse example and the corresponding PRM. The robot needs to pick up an item and delivers the item to the right location in sequence when the item may not be ready and the delivery location could be busy. (a): a 4\u00d74444\\times 44 \u00d7 4 grid world in which is our robot, is the charging station, is the item pickup location, is the delivery location;(b): The corresponding PRM, where an edge q\u2192\ud835\udc5f\u2113\u2223pq\u2032\ud835\udc5fconditional\u2113\ud835\udc5d\u2192\ud835\udc5esuperscript\ud835\udc5e\u2032q\\xrightarrow[r]{\\ell\\mid p}q^{\\prime}italic_q start_ARROW underitalic_r start_ARROW start_OVERACCENT roman_\u2113 \u2223 italic_p end_OVERACCENT \u2192 end_ARROW end_ARROW italic_q start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT represents that state q\ud835\udc5eqitalic_q transitions to q\u2032superscript\ud835\udc5e\u2032q^{\\prime}italic_q start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT on event l\ud835\udc59litalic_l with probability p\ud835\udc5dpitalic_p and receives reward r\ud835\udc5fritalic_r.",
124
+ "url": "http://arxiv.org/html/2408.10381v1/x1.png"
125
+ },
126
+ "1(b)": {
127
+ "figure_path": "2408.10381v1_figure_1(b).png",
128
+ "caption": "(b) The Warehouse PRM\nFigure 1: The Warehouse example and the corresponding PRM. The robot needs to pick up an item and delivers the item to the right location in sequence when the item may not be ready and the delivery location could be busy. (a): a 4\u00d74444\\times 44 \u00d7 4 grid world in which is our robot, is the charging station, is the item pickup location, is the delivery location;(b): The corresponding PRM, where an edge q\u2192\ud835\udc5f\u2113\u2223pq\u2032\ud835\udc5fconditional\u2113\ud835\udc5d\u2192\ud835\udc5esuperscript\ud835\udc5e\u2032q\\xrightarrow[r]{\\ell\\mid p}q^{\\prime}italic_q start_ARROW underitalic_r start_ARROW start_OVERACCENT roman_\u2113 \u2223 italic_p end_OVERACCENT \u2192 end_ARROW end_ARROW italic_q start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT represents that state q\ud835\udc5eqitalic_q transitions to q\u2032superscript\ud835\udc5e\u2032q^{\\prime}italic_q start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT on event l\ud835\udc59litalic_l with probability p\ud835\udc5dpitalic_p and receives reward r\ud835\udc5fritalic_r.",
129
+ "url": "http://arxiv.org/html/2408.10381v1/x2.png"
130
+ },
131
+ "2(a)": {
132
+ "figure_path": "2408.10381v1_figure_2(a).png",
133
+ "caption": "(a) Labeled RiverSwim MDP\nFigure 2: The labeled RiverSwim and the corresponding DRM.",
134
+ "url": "http://arxiv.org/html/2408.10381v1/x11.png"
135
+ },
136
+ "2(b)": {
137
+ "figure_path": "2408.10381v1_figure_2(b).png",
138
+ "caption": "(b) The Patrol DRM\nFigure 2: The labeled RiverSwim and the corresponding DRM.",
139
+ "url": "http://arxiv.org/html/2408.10381v1/x12.png"
140
+ },
141
+ "3(a)": {
142
+ "figure_path": "2408.10381v1_figure_3(a).png",
143
+ "caption": "(a) : H=10\ud835\udc3b10H=10italic_H = 10, O=5\ud835\udc425O=5italic_O = 5\nFigure 3: Experimental results in RiverSwim",
144
+ "url": "http://arxiv.org/html/2408.10381v1/x13.png"
145
+ },
146
+ "3(b)": {
147
+ "figure_path": "2408.10381v1_figure_3(b).png",
148
+ "caption": "(b) : H=20\ud835\udc3b20H=20italic_H = 20, O=10\ud835\udc4210O=10italic_O = 10\nFigure 3: Experimental results in RiverSwim",
149
+ "url": "http://arxiv.org/html/2408.10381v1/x14.png"
150
+ },
151
+ "3(c)": {
152
+ "figure_path": "2408.10381v1_figure_3(c).png",
153
+ "caption": "(c) : H=30\ud835\udc3b30H=30italic_H = 30, O=15\ud835\udc4215O=15italic_O = 15\nFigure 3: Experimental results in RiverSwim",
154
+ "url": "http://arxiv.org/html/2408.10381v1/x15.png"
155
+ },
156
+ "4(a)": {
157
+ "figure_path": "2408.10381v1_figure_4(a).png",
158
+ "caption": "(a) : H=9\ud835\udc3b9H=9italic_H = 9, 3\u00d73333\\times 33 \u00d7 3 warehouse\nFigure 4: Experimental results in Warehouse",
159
+ "url": "http://arxiv.org/html/2408.10381v1/x16.png"
160
+ },
161
+ "4(b)": {
162
+ "figure_path": "2408.10381v1_figure_4(b).png",
163
+ "caption": "(b) : H=12\ud835\udc3b12H=12italic_H = 12, 4\u00d74444\\times 44 \u00d7 4 warehouse\nFigure 4: Experimental results in Warehouse",
164
+ "url": "http://arxiv.org/html/2408.10381v1/x17.png"
165
+ },
166
+ "4(c)": {
167
+ "figure_path": "2408.10381v1_figure_4(c).png",
168
+ "caption": "(c) : H=15\ud835\udc3b15H=15italic_H = 15, 5\u00d75555\\times 55 \u00d7 5 warehouse\nFigure 4: Experimental results in Warehouse",
169
+ "url": "http://arxiv.org/html/2408.10381v1/x18.png"
170
+ }
171
+ },
172
+ "validation": true,
173
+ "references": [
174
+ {
175
+ "1": {
176
+ "title": "Near-optimal regret bounds for reinforcement learning.",
177
+ "author": "Peter Auer, Thomas Jaksch, and Ronald Ortner.",
178
+ "venue": "Advances in neural information processing systems, 21, 2008.",
179
+ "url": null
180
+ }
181
+ },
182
+ {
183
+ "2": {
184
+ "title": "Minimax regret bounds for reinforcement learning, 2017.",
185
+ "author": "Mohammad Gheshlaghi Azar, Ian Osband, and R\u00e9mi Munos.",
186
+ "venue": null,
187
+ "url": null
188
+ }
189
+ },
190
+ {
191
+ "3": {
192
+ "title": "Rewarding behaviors.",
193
+ "author": "Fahiem Bacchus, Craig Boutilier, and Adam Grove.",
194
+ "venue": "In Proceedings of the National Conference on Artificial Intelligence, pages 1160\u20131167, 1996.",
195
+ "url": null
196
+ }
197
+ },
198
+ {
199
+ "4": {
200
+ "title": "Provable self-play algorithms for competitive reinforcement learning.",
201
+ "author": "Yu Bai and Chi Jin.",
202
+ "venue": "In International conference on machine learning, pages 551\u2013560. PMLR, 2020.",
203
+ "url": null
204
+ }
205
+ },
206
+ {
207
+ "5": {
208
+ "title": "Exploration in reward machines with low regret.",
209
+ "author": "Hippolyte Bourel, Anders Jonsson, Odalric-Ambrym Maillard, and Mohammad Sadegh Talebi.",
210
+ "venue": "In Francisco Ruiz, Jennifer Dy, and Jan-Willem van de Meent, editors, Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, volume 206 of Proceedings of Machine Learning Research, pages 4114\u20134146. PMLR, 25\u201327 Apr 2023.",
211
+ "url": null
212
+ }
213
+ },
214
+ {
215
+ "6": {
216
+ "title": "Regret analysis of stochastic and nonstochastic multi-armed bandit problems, 2012.",
217
+ "author": "S\u00e9bastien Bubeck and Nicol\u00f2 Cesa-Bianchi.",
218
+ "venue": null,
219
+ "url": null
220
+ }
221
+ },
222
+ {
223
+ "7": {
224
+ "title": "Reward machines for vision-based robotic manipulation.",
225
+ "author": "Alberto Camacho, Jacob Varley, Andy Zeng, Deepali Jain, Atil Iscen, and Dmitry Kalashnikov.",
226
+ "venue": "In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 14284\u201314290. IEEE, 2021.",
227
+ "url": null
228
+ }
229
+ },
230
+ {
231
+ "8": {
232
+ "title": "Prediction, learning, and games.",
233
+ "author": "Nicolo Cesa-Bianchi and G\u00e1bor Lugosi.",
234
+ "venue": "Cambridge university press, 2006.",
235
+ "url": null
236
+ }
237
+ },
238
+ {
239
+ "9": {
240
+ "title": "Sample complexity of episodic fixed-horizon reinforcement learning.",
241
+ "author": "Christoph Dann and Emma Brunskill.",
242
+ "venue": "Advances in Neural Information Processing Systems, 28, 2015.",
243
+ "url": null
244
+ }
245
+ },
246
+ {
247
+ "10": {
248
+ "title": "Global reinforcement learning: Beyond linear and convex rewards via submodular semi-gradient methods.",
249
+ "author": "Riccardo De Santi, Manish Prajapat, and Andreas Krause.",
250
+ "venue": "arXiv preprint arXiv:2407.09905, 2024.",
251
+ "url": null
252
+ }
253
+ },
254
+ {
255
+ "11": {
256
+ "title": "Learning quadruped locomotion policies using logical rules.",
257
+ "author": "David DeFazio, Yohei Hayamizu, and Shiqi Zhang.",
258
+ "venue": "In Proceedings of the International Conference on Automated Planning and Scheduling, volume 34, pages 142\u2013150, 2024.",
259
+ "url": null
260
+ }
261
+ },
262
+ {
263
+ "12": {
264
+ "title": "Inferring probabilistic reward machines from non-markovian reward signals for reinforcement learning.",
265
+ "author": "Taylor Dohmen, Noah Topper, George Atia, Andre Beckus, Ashutosh Trivedi, and Alvaro Velasquez.",
266
+ "venue": "In Proceedings of the International Conference on Automated Planning and Scheduling, volume 32, pages 574\u2013582, 2022.",
267
+ "url": null
268
+ }
269
+ },
270
+ {
271
+ "13": {
272
+ "title": "On Tail Probabilities for Martingales.",
273
+ "author": "David A. Freedman.",
274
+ "venue": "The Annals of Probability, 3(1):100 \u2013 118, 1975.",
275
+ "url": null
276
+ }
277
+ },
278
+ {
279
+ "14": {
280
+ "title": "Minimax pac bounds on the sample complexity of reinforcement learning with a generative model.",
281
+ "author": "Mohammad Gheshlaghi Azar, R\u00e9mi Munos, and Hilbert J Kappen.",
282
+ "venue": "Machine learning, 91:325\u2013349, 2013.",
283
+ "url": null
284
+ }
285
+ },
286
+ {
287
+ "15": {
288
+ "title": "Decentralized graph-based multi-agent reinforcement learning using reward machines.",
289
+ "author": "Jueming Hu, Zhe Xu, Weichang Wang, Guannan Qu, Yutian Pang, and Yongming Liu.",
290
+ "venue": "Neurocomputing, 564:126974, 2024.",
291
+ "url": null
292
+ }
293
+ },
294
+ {
295
+ "16": {
296
+ "title": "Using reward machines for high-level task specification and decomposition in reinforcement learning.",
297
+ "author": "Rodrigo Toro Icarte, Toryn Klassen, Richard Valenzano, and Sheila McIlraith.",
298
+ "venue": "In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2107\u20132116. PMLR, 10\u201315 Jul 2018.",
299
+ "url": null
300
+ }
301
+ },
302
+ {
303
+ "17": {
304
+ "title": "Reward machines: Exploiting reward function structure in reinforcement learning.",
305
+ "author": "Rodrigo Toro Icarte, Toryn Q Klassen, Richard Valenzano, and Sheila A McIlraith.",
306
+ "venue": "Journal of Artificial Intelligence Research, 73:173\u2013208, 2022.",
307
+ "url": null
308
+ }
309
+ },
310
+ {
311
+ "18": {
312
+ "title": "Reward-free exploration for reinforcement learning, 2020.",
313
+ "author": "Chi Jin, Akshay Krishnamurthy, Max Simchowitz, and Tiancheng Yu.",
314
+ "venue": null,
315
+ "url": null
316
+ }
317
+ },
318
+ {
319
+ "19": {
320
+ "title": "Reinforcement learning with temporal logic rewards.",
321
+ "author": "Xiao Li, Cristian-Ioan Vasile, and Calin Belta.",
322
+ "venue": "In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3834\u20133839. IEEE, 2017.",
323
+ "url": null
324
+ }
325
+ },
326
+ {
327
+ "20": {
328
+ "title": "A sharp analysis of model-based reinforcement learning with self-play.",
329
+ "author": "Qinghua Liu, Tiancheng Yu, Yu Bai, and Chi Jin.",
330
+ "venue": "In International Conference on Machine Learning, pages 7001\u20137010. PMLR, 2021.",
331
+ "url": null
332
+ }
333
+ },
334
+ {
335
+ "21": {
336
+ "title": "Empirical bernstein bounds and sample variance penalization, 2009.",
337
+ "author": "Andreas Maurer and Massimiliano Pontil.",
338
+ "venue": null,
339
+ "url": null
340
+ }
341
+ },
342
+ {
343
+ "22": {
344
+ "title": "Fast active learning for pure exploration in reinforcement learning.",
345
+ "author": "Pierre M\u00e9nard, Omar Darwiche Domingues, Anders Jonsson, Emilie Kaufmann, Edouard Leurent, and Michal Valko.",
346
+ "venue": "In International Conference on Machine Learning, pages 7599\u20137608. PMLR, 2021.",
347
+ "url": null
348
+ }
349
+ },
350
+ {
351
+ "23": {
352
+ "title": "Reward machines for cooperative multi-agent reinforcement learning.",
353
+ "author": "Cyrus Neary, Zhe Xu, Bo Wu, and Ufuk Topcu.",
354
+ "venue": "arXiv preprint arXiv:2007.01962, 2020.",
355
+ "url": null
356
+ }
357
+ },
358
+ {
359
+ "24": {
360
+ "title": "Submodular reinforcement learning.",
361
+ "author": "Manish Prajapat, Mojm\u00edr Mutn\u1ef3, Melanie N Zeilinger, and Andreas Krause.",
362
+ "venue": "arXiv preprint arXiv:2307.13372, 2023.",
363
+ "url": null
364
+ }
365
+ },
366
+ {
367
+ "25": {
368
+ "title": "On reward-free rl with kernel and neural function approximations: Single-agent mdp and markov game.",
369
+ "author": "Shuang Qiu, Jieping Ye, Zhaoran Wang, and Zhuoran Yang.",
370
+ "venue": "In International Conference on Machine Learning, pages 8737\u20138747. PMLR, 2021.",
371
+ "url": null
372
+ }
373
+ },
374
+ {
375
+ "26": {
376
+ "title": "A learning based approach to control synthesis of markov decision processes for linear temporal logic specifications.",
377
+ "author": "Dorsa Sadigh, Eric S Kim, Samuel Coogan, S Shankar Sastry, and Sanjit A Seshia.",
378
+ "venue": "In 53rd IEEE Conference on Decision and Control, pages 1091\u20131096. IEEE, 2014.",
379
+ "url": null
380
+ }
381
+ },
382
+ {
383
+ "27": {
384
+ "title": "Reinforcement learning: An introduction.",
385
+ "author": "Richard S Sutton and Andrew G Barto.",
386
+ "venue": "MIT press, 2018.",
387
+ "url": null
388
+ }
389
+ },
390
+ {
391
+ "28": {
392
+ "title": "Reward-free rl is no harder than reward-aware rl in linear markov decision processes.",
393
+ "author": "Andrew J Wagenmaker, Yifang Chen, Max Simchowitz, Simon Du, and Kevin Jamieson.",
394
+ "venue": "In International Conference on Machine Learning, pages 22430\u201322456. PMLR, 2022.",
395
+ "url": null
396
+ }
397
+ },
398
+ {
399
+ "29": {
400
+ "title": "On reward-free reinforcement learning with linear function approximation.",
401
+ "author": "Ruosong Wang, Simon S Du, Lin Yang, and Russ R Salakhutdinov.",
402
+ "venue": "Advances in neural information processing systems, 33:17816\u201317826, 2020.",
403
+ "url": null
404
+ }
405
+ },
406
+ {
407
+ "30": {
408
+ "title": "Correct-by-synthesis reinforcement learning with temporal logic constraints.",
409
+ "author": "Min Wen, R\u00fcdiger Ehlers, and Ufuk Topcu.",
410
+ "venue": "In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 4983\u20134990. IEEE, 2015.",
411
+ "url": null
412
+ }
413
+ },
414
+ {
415
+ "31": {
416
+ "title": "Joint inference of reward machines and policies for reinforcement learning, 2022.",
417
+ "author": "Zhe Xu, Ivan Gavran, Yousef Ahmad, Rupak Majumdar, Daniel Neider, Ufuk Topcu, and Bo Wu.",
418
+ "venue": null,
419
+ "url": null
420
+ }
421
+ },
422
+ {
423
+ "32": {
424
+ "title": "Tighter problem-dependent regret bounds in reinforcement learning without domain knowledge using value function bounds.",
425
+ "author": "Andrea Zanette and Emma Brunskill.",
426
+ "venue": "In International Conference on Machine Learning, pages 7304\u20137312. PMLR, 2019.",
427
+ "url": null
428
+ }
429
+ },
430
+ {
431
+ "33": {
432
+ "title": "Reward-free model-based reinforcement learning with linear function approximation.",
433
+ "author": "Weitong Zhang, Dongruo Zhou, and Quanquan Gu.",
434
+ "venue": "Advances in Neural Information Processing Systems, 34:1582\u20131593, 2021a.",
435
+ "url": null
436
+ }
437
+ },
438
+ {
439
+ "34": {
440
+ "title": "Task-agnostic exploration in reinforcement learning.",
441
+ "author": "Xuezhou Zhang, Yuzhe Ma, and Adish Singla.",
442
+ "venue": "Advances in Neural Information Processing Systems, 33:11734\u201311743, 2020.",
443
+ "url": null
444
+ }
445
+ },
446
+ {
447
+ "35": {
448
+ "title": "Is reinforcement learning more difficult than bandits? a near-optimal algorithm escaping the curse of horizon.",
449
+ "author": "Zihan Zhang, Xiangyang Ji, and Simon Du.",
450
+ "venue": "In Conference on Learning Theory, pages 4528\u20134531. PMLR, 2021b.",
451
+ "url": null
452
+ }
453
+ },
454
+ {
455
+ "36": {
456
+ "title": "Multi-agent reinforcement learning with a hierarchy of reward machines.",
457
+ "author": "Xuejing Zheng and Chao Yu.",
458
+ "venue": "arXiv preprint arXiv:2403.07005, 2024.",
459
+ "url": null
460
+ }
461
+ }
462
+ ],
463
+ "url": "http://arxiv.org/html/2408.10381v1"
464
+ }
20241127/2105.02653v3.json ADDED
@@ -0,0 +1,447 @@
1
+ {
2
+ "title": "Regularizing Explanations in Bayesian Convolutional Neural Networks",
3
+ "abstract": "Neural networks are powerful function approximators with tremendous potential in learning complex distributions. However, they are prone to overfitting on spurious patterns. Bayesian inference provides a principled way to regularize neural networks and give well-calibrated uncertainty estimates. It allows us to specify prior knowledge on weights. However, specifying domain knowledge via distributions over weights is infeasible. Furthermore, it is unable to correct models when they focus on spurious or irrelevant features. New methods within explainable artificial intelligence allow us to regularize explanations in the form of feature importance to add domain knowledge and correct the models\u2019 focus. Nevertheless, they are incompatible with Bayesian neural networks, as they require us to modify the loss function. We propose a new explanation regularization method that is compatible with Bayesian inference. Consequently, we can quantify uncertainty and, at the same time, have correct explanations. We test our method using four different datasets. The results show that our method improves predictive performance when models overfit on spurious features or are uncertain of which features to focus on. Moreover, our method performs better than augmenting training data with samples where spurious features are removed through masking. We provide code, data, trained weights, and hyperparameters.111https://github.com/observer4599/explanation-regularization-in-bnn",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "nn have in recent years shown high performance and been successful in many applications (Goodfellow et al., 2016 ###reference_b1###; Silver et al., 2018 ###reference_b2###; Esteva et al., 2019 ###reference_b3###; Kiran et al., 2022 ###reference_b4###). However, they can overfit on spurious features in training datasets and lose the ability to generalize (Szegedy et al., 2014 ###reference_b5###; Lapuschkin et al., 2019 ###reference_b6###). Furthermore, we understand how they work computationally, but are unable to extract high-level insights that make humans understand and trust them (Arrieta et al., 2020 ###reference_b7###).\nTo prevent overfitting, we use regularization techniques like weight regularization, dropout, early stopping, and explanation regularization (Ross et al., 2017 ###reference_b8###). A probabilistic approach to regularizing neural networks is to leverage Bayesian inference (Blundell et al., 2015 ###reference_b9###; Jospin et al., 2022 ###reference_b10###). In Bayesian NNs, we find the posterior distribution on weights rather than point estimates. To find the posterior distribution, we define a prior distribution on weights that moves them towards our preferred choices. As the amount of data increases, the prior weighs less (Blundell et al., 2015 ###reference_b9###; Prince, 2023 ###reference_b11###). Although Bayesian inference gives us well-calibrated uncertainty estimates, this principled way to regularize NNs is incompatible with newer methods that regularize explanations. Explanation regularization came as a response to the need of explainable NNs (Ross et al., 2017 ###reference_b8###; Teso and Kersting, 2019 ###reference_b12###; Rieger et al., 2020 ###reference_b13###). In explanation regularization, we have annotated masks that we refer to as explanation feedback. They indicate areas in the input space irrelevant for predictions, which is seen in Fig. 1 ###reference_###. Furthermore, Bayesian inference regularizes the model via prior on weights. However, it is unable to say anything regarding the input space. In contrast, explanation regularization enables us to add domain knowledge in the input space to regularize NNs\u2019 explanations, in the form of saliency maps. The ability to add domain knowledge in the input space, in turn, can make the models focus on the right features.\nOur method provides a way to regularize explanations that is compatible with Bayesian convolutional neural networks. By merging Bayesian inference and our explanation regularization method, we introduce NNs with correctly calibrated uncertainty through a principled way and correct explanations that previous approaches have not been able to provide. Experimentally, we demonstrate that our method makes models perform better when they overfit to spurious features that a user can indicate in the input space. Furthermore, it can improve model performance when the model is uncertain on what to look at. We also show that our approach is more versatile than augmenting training data with samples where spurious features are masked.\nTo summarize: 1) we propose a new explanation regularization method compatible with Bayesian CNNs that provides well calibrated uncertainty estimate in a principled way. 2) We test our method on four different datasets with and without spurious features. 3) Experiments demonstrate that our method makes models perform better when they overfit to spurious features or are uncertain about which parts of the input to focus on.\n###figure_1###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Background",
15
+ "text": "We introduce the background on Bayesian NN (Prince, 2023 ###reference_b11###; Murphy, 2023 ###reference_b16###) and the local reparameterization trick (LRT) (Kingma et al., 2015 ###reference_b17###) that our method relies on. The loss function introduced in this section will be used in Section 4 ###reference_###."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Bayesian Neural Network",
21
+ "text": "In NNs, we learn the weights via maximum likelihood estimation. Given a dataset with samples, we optimize the objective defined by assuming that the samples are independent and identically distributed. There are several choices of regularization, one is to use the maximum a posteriori estimation defined by , where moves the weights towards the choices we prefer to prevent overfitting. is referred to as the prior, and reflects our prior belief of what the weights should be before seeing the data. The prior imposes L1 or L2 regularization depending on if it is Laplace or Gaussian respectively.\nBoth maximum likelihood estimation and maximum a posterior estimation focus on finding point estimates of the weights. In Bayesian NNs, we represent weights as probability distributions and not as point estimates. To compute the full distribution requires us to compute the integral , which is infeasible. A way to solve this is to use variational inference (VI) (Blei et al., 2017 ###reference_b18###) and minimize the Kullback\u2013Leibler (KL) divergence , where is the variational distribution and is the posterior distribution (Blundell et al., 2015 ###reference_b9###). We cannot minimize the KL divergence directly, but we can solve the optimization problem for a lower bound on the evidence that is independent of the distribution parameters . The lower bound is known as the evidence lower bound (ELBO) and defined by\nThe objective maximizes the log likelihood of the data like in the maximum likelihood estimation. It is important to note that we are maximizing with respect to the distribution parameters and not the weights themselves like in maximum likelihood estimation where we treated them as point estimates. The objective additionally minimizes the KL divergence between the variational distribution and the prior distribution, moving the probability mass towards our choice of weights. The objective has to trade off between these two quantities, but as the amount of data increases, the likelihood term will weigh more.\nTo optimize Eq. 1 ###reference_###, we use stochastic gradient descent with the reparameterization trick (Kingma and Welling, 2014 ###reference_b19###; Blundell et al., 2015 ###reference_b9###). We model the variational distribution with a fully factorized Gaussian distribution defined by using the mean field approximation. To sample weights, we first sample noise , thereafter compute for independently. By using the reparameterization trick, we can update the parameters using backpropagation. The loss function we optimize in a Bayesian CNN using minibatches is defined by\nwhere is the number of minibatches, is the number of samples in our dataset and the number of Monte Carlo samples. We use fully factorized Gaussians for both the variational distribution and the prior distribution so that the KL divergence term can be solved in closed form (Kingma and Welling, 2014 ###reference_b19###)."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Local Reparameterization Trick",
27
+ "text": "To reduce variance of Eq. 2 ###reference_###, Kingma et al. (2015 ###reference_b17###) propose the local reparameterization trick (LRT). Instead of sampling weights as in Eq. 2 ###reference_###, LRT samples activations. Thus, the uncertainty is moved from weights that affect all samples to activations that is local and sample dependent. The LRT loss function is defined by\nwhere we sample activations rather than weights. We omit in the condition as no extra information is added given that we know the activations. Do note that we do need to know in the first place to compute the activations. We show how these activations are sampled in fully connected layers and in convolutional layers (Kingma et al., 2015 ###reference_b17###; Molchanov et al., 2017 ###reference_b20###).\nFully Connected Layer. Assume that the input to a layer is , to compute the activation, we compute the mean and variance of the activation defined by and . Thus, the distribution on the activation is and can be sampled as shown in Section 2.1 ###reference_###.\nConvolutional Layer. Assume that the input to a layer is and the weights is also a matrix. We assume only a single feature map to simplify the calculations. The mean and variance are defined by and where is the convolution operator and is applied element-wise. The distribution on activations is then and the reparameterization trick can be used to sample activations."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Related Work",
33
+ "text": "XAI aims to help humans understand artificial intelligence systems, their strengths and weaknesses, and how they will perform in unknown situations (Gunning and Aha, 2019 ###reference_b21###). Methods to understand machine learning models are often divided into interpretable models and post hoc explainability (Lipton, 2018 ###reference_b22###; Arrieta et al., 2020 ###reference_b7###; Murphy, 2023 ###reference_b16###). Our method falls under post hoc explainability, i.e., methods that are applied to models after training. The method we propose is related to a line of work that corrects or prevents models from looking at spurious features. As far as we know, Ross et al. (2017 ###reference_b8###) introduced the first method to correct and prevent models from looking at spurious features in the context of explainable artificial intelligence (XAI). To prevent models from learning spurious features, Ross et al. (2017 ###reference_b8###) regularize the input gradient in the area specified by an explanation feedback; that is, they minimize the norm of the input gradient in the region that the user specifies as irrelevant. Liu and Avci (2019 ###reference_b23###) use a similar approach to Ross et al. (2017 ###reference_b8###) in text classification to make a model focus less on certain words. Similarly, working on text, Du et al. (2019 ###reference_b24###) encourage sparse importance values on irrelevant features and require that the models be uncertain when important features are removed. Rieger et al. (2020 ###reference_b13###) regularize explanations leveraging contextual decomposition explanation penalization, which allows them to penalize both feature importance and feature interaction. Shao et al. (2021 ###reference_b25###) regularize explanations using influence functions and show that this is better than using input gradients. Erion et al. (2021 ###reference_b26###) regularize explanations by specifying, before training, domain knowledge about how explanations should look. For example, the total variation between feature importance values for pixels in image data should be low. Like the abovementioned methods, Selvaraju et al. (2019 ###reference_b27###) propose a new loss function to align human feedback on important features with where models look. Common to all these approaches is that they modify the loss function by augmenting it with additional terms. This, however, makes it impossible to minimize the ELBO, as the loss function is modified and augmented with new terms. We instead introduce a simple approach leveraging the LRT to add explanation feedback that prevents models from looking at irrelevant features and adds domain knowledge.\nDifferently from the previously mentioned methods, Schramowski et al. (2020 ###reference_b28###); Teso and Kersting (2019 ###reference_b12###) propose a model-agnostic approach to regularize explanations by augmenting the training dataset with counterexamples. These counterexamples are the same as the samples in the training dataset, but with the spurious features modified. These modifications can be replacing spurious features with random noise or using feature values from other samples without spurious features. We show in the experiments that this is less effective than our approach since location-dependent spurious features cannot be removed. Furthermore, background information can sometimes be a positive influence, but this method does not allow models to make partial use of such features. Lastly, creating counterexamples introduces out-of-distribution samples into the training dataset that can negatively affect training.\n###figure_2###"
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "Method",
39
+ "text": "We detail our method by first setting up the model and dataset assumptions. Afterward, we detail how to regularize explanations in Bayesian CNN using our method. We assume that we have a Bayesian CNN represented by . Furthermore, we assume access to a dataset where is an input image and is a target label. is the set of real numbers if it is a regression task or a set of class labels for classification. is an explanation feedback. A value of in indicates an area of where the NN should not focus on when predicting . A value of points at an area where no feedback is given, that is, it does not matter what the model does in that region.\nWe showed in Section 2.2 ###reference_### that training a Bayesian CNN with LRT amounts to minimize Eq. 3 ###reference_###. To regularize explanations implies regularizing the input gradients (Ross et al., 2017 ###reference_b8###) or some other quantity (Rieger et al., 2020 ###reference_b13###; Selvaraju et al., 2019 ###reference_b27###). But to regularize input gradient without changing the objective, we need to know the distribution on input gradients, which we do not know. Instead, we leverage activation outputted from convolutional layers to incorporate the explanation feedback to regularize explanations. To show how our method works, we take the objective in Eq. 3 ###reference_### and show how the likelihood term is computed to incorporate explanation feedback.\nWe incorporate the explanation feedback via the last convolutional layer in a Bayesian CNN. We downsize the explanation feedback to the size of the activation produced by the last convolutional layer, as seen in Fig. 2 ###reference_### using a function . In practice, the function is implemented using torch.nn.AdaptiveMaxPool2d (Ansel et al., 2024 ###reference_b29###). Then we set the evidence of activation overlapping with \u2019s in the explanation feedback to . We denote those activations that the explanation feedback indicates are unimportant as , while the rest of the activations in the network as . When we refer to all activations in the network, we simply write . The log likelihood term with explanation feedback added is defined by\nBecause the size of the explanation feedback is larger than the prediction output, we introduce a hyperparameter to lower the importance of in Eq. 4 ###reference_### and set it to . Note that we still minimize Eq. 3 ###reference_### but add explanation feedback using activations as seen in Fig. 2 ###reference_### via the likelihood term as shown in Eq. 4 ###reference_###."
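As a rough illustration of the activation-masking step described above, the sketch below shows how a binary feedback mask could be downsized with torch.nn.AdaptiveMaxPool2d and used to penalize activations in the regions marked as irrelevant. This is not the paper's exact likelihood term: the tensor names, the squared penalty, and the weighting value gamma are assumptions for illustration only.

import torch
import torch.nn as nn

def feedback_penalty(activations, feedback, gamma=0.1):
    # activations: (N, C, h, w) output of the last convolutional layer
    # feedback:    (N, H, W) binary mask, 1 = region marked as irrelevant, 0 = no preference
    # gamma:       down-weighting factor for the feedback term (illustrative value)
    n, c, h, w = activations.shape
    pool = nn.AdaptiveMaxPool2d((h, w))           # plays the role of the downsizing function f(.)
    mask = pool(feedback.unsqueeze(1).float())    # (N, 1, h, w), broadcast over channels below
    masked = activations * mask                   # activations overlapping the irrelevant regions
    # Push these activations toward zero; a squared penalty corresponds, up to constants,
    # to a zero-mean Gaussian likelihood contribution on the masked activations.
    return gamma * masked.pow(2).mean()

Under these assumptions, the returned term would be included alongside the data log-likelihood of the variational objective, playing the role of the additional likelihood contribution on the masked activations.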
40
+ },
41
+ {
42
+ "section_id": "5",
43
+ "parent_section_id": null,
44
+ "section_name": "Experiments",
45
+ "text": "We first detail our experimental setup, including the datasets used, model architectures, and additional details. Afterward, we show how our model improves the predictive performance while minimizing the models\u2019 focus on spurious features."
46
+ },
47
+ {
48
+ "section_id": "5.1",
49
+ "parent_section_id": "5",
50
+ "section_name": "Experimental Setup",
51
+ "text": "To test the performance of our method, we use four different datasets. All of the datasets except for the ISIC skin cancer dataset were downloaded via torchvision.datasets (Ansel et al., 2024 ###reference_b29###).\nDatasets. We create two versions of Decoy MNIST (Ross et al., 2017 ###reference_b8###) which builds on The MNIST database of handwritten digits (LeCun et al., 2010 ###reference_b30###). The MNIST dataset consists of black and white images of digits from 0 to 9. The Decoy MNIST dataset adds decoys at the corners and sides of input samples as seen in Fig. 3(c) ###reference_sf3###. In the first version that we name \u201ccolor\u201d, the grayscale of decoys in the training has pixel intensity where is the class label. In the test dataset, is randomly sampled from the set of class labels. The location of the decoy is randomly placed both in the training and test dataset. In the other version called \u201clocation\u201d, the location of the decoys follows the class label in the training dataset but is random in the testing dataset. The grayscale intensity is randomly drawn both for the training and testing datasets. The ISIC dataset is a dataset for skin cancer diagnosis (Codella et al., 2019 ###reference_b14###; Tschandl et al., 2018 ###reference_b15###). We utilize only two classes, benign and malignant. We increase the importance of the malignant class in the loss because the dataset is heavily imbalanced. The version of ISIC dataset we use is curated by using code from Rieger et al. (2020 ###reference_b13###). The explanation feedback we used is also from Rieger et al. (2020 ###reference_b13###). Oxford-IIIT-Pet (Parkhi et al., 2012 ###reference_b31###) consists of cat and dog images with 37 different classes of different cat and dog breeds. The semantic boundaries dataset (SBD) (Hariharan et al., 2011 ###reference_b32###) dataset consists of images from the PASCAL VOC 2011 dataset (Everingham et al., 2011 ###reference_b33###). For the SBD, we use a subset of classes: bird, bus, cat, dog, horse by following Schramowski et al. (2020 ###reference_b28###). We only use samples where one and only one of these classes appears.\nModels. We use the LeNet-5222https://pytorch.org/tutorials/beginner/introyt/introyt1_tutorial.html#pytorch-models ###reference_royt/introyt1_tutorial.html#pytorch-models### (LeCun et al., 1998 ###reference_b34###) model for the decoy MNIST datasets and AlexNet (Krizhevsky et al., 2012 ###reference_b35###) for the other datasets. We load pretrained weights from PyTorch for AlexNet333https://pytorch.org/hub/pytorch_vision_alexnet/ ###reference_xnet/###.\nSoftware and Hardware. We used PyTorch Lightning to do the experiments (Falcon and team, 2024 ###reference_b36###). The experiments ran on a MacBook Pro 2023 with Apple M2 Max chip and 64 GB RAM. We used the MPS backend for GPU accelerated training. The metrics we compute are calculated using scikit-learn (Pedregosa et al., 2011 ###reference_b37###). The saliency maps are created using Captum (Kokhlikyan et al., 2020 ###reference_b38###)."
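For concreteness, a minimal sketch of how a 'color'-style Decoy MNIST sample could be generated is given below; the patch size, the label-to-intensity mapping, and the set of candidate locations are illustrative assumptions, not the authors' exact construction.

import numpy as np

def add_color_decoy(image, label, train=True, patch=4, rng=np.random):
    # image: (28, 28) uint8 MNIST digit; label: int in 0..9
    img = image.copy()
    y = label if train else rng.randint(10)       # decoy intensity follows the label only at train time
    intensity = 255 - 25 * y                      # one possible label-to-grayscale mapping
    corners = [(0, 0), (0, 28 - patch), (28 - patch, 0), (28 - patch, 28 - patch)]
    r, c = corners[rng.randint(len(corners))]     # decoy location is random in the 'color' variant
    img[r:r + patch, c:c + patch] = intensity
    return img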
52
+ },
53
+ {
54
+ "section_id": "5.2",
55
+ "parent_section_id": "5",
56
+ "section_name": "Predictive Performance",
57
+ "text": "We compare the predictive performance of Bayesian CNNs trained without any feedback, with data augmentation using counterexamples (Schramowski et al., 2020 ###reference_b28###; Teso and Kersting, 2019 ###reference_b12###), and with the method outlined above. The data augmentation approach is, as far as we know, the only other approach compatible with Bayesian CNNs because it is model agnostic. For this approach, we first replace a region specified to be irrelevant by the explanation feedback with noise sampled from a uniform distribution on the interval and afterward, we preprocess the images with standardization. We only use of the explanation feedback available, since regularizing all training samples negatively impacts our method in some instances.\nWe observed during the experiments that there are no performance gains when we apply our method to models that are not focusing on spurious features or when models are not uncertain. That is, if we initialized weights with small variance we could not see a performance gain on the datasets without spurious features because the pretrained AlexNet weights from PyTorch are already near optimal for the model architecture. Instead, we want to demonstrate our method under the conditions that there are spurious features or that the models are uncertain, by initializing with larger variance, and compare it to the data augmentation method. Tables 1 ###reference_### and 2 ###reference_### indicate that our method can improve model performance when models have overfitted to spurious features or the model is uncertain. The sample standard deviations shown in Tables 1 ###reference_### and 2 ###reference_### are computed by training three models using 3-fold cross-validation and testing the three models on an independent test dataset.\nFor the data augmentation method, we see that it can affect results negatively when the models are not overfitting to spurious features but are still uncertain. This indicates that background information can be useful, but since the information is removed entirely, the models cannot take advantage of it. While our method tries to tell the models where not to look, we do not remove the information entirely and can use the hyperparameter to balance this aspect.\n###figure_3### ###figure_4### ###figure_5### ###figure_6###"
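A minimal sketch of the counterexample-based data augmentation baseline described above, assuming images scaled to [0, 1]; the noise interval and the mean/std used for standardization are placeholders for the actual preprocessing constants.

import numpy as np

def make_counterexample(image, feedback, mean, std, low=0.0, high=1.0, rng=np.random):
    # image:    (H, W, C) float array scaled to [0, 1]
    # feedback: (H, W) binary mask, 1 = region marked as irrelevant
    noisy = image.copy()
    noise = rng.uniform(low, high, size=image.shape)
    noisy[feedback == 1] = noise[feedback == 1]   # overwrite the marked region with uniform noise
    return (noisy - mean) / std                   # standardize afterwards, as described above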
58
+ },
59
+ {
60
+ "section_id": "5.3",
61
+ "parent_section_id": "5",
62
+ "section_name": "Model Focus",
63
+ "text": "Table 2 ###reference_### demonstrates that our method is good at removing the models\u2019 focus on spurious features. The overlap is computed as the amount of importance placed on the area the explanation feedback indicates as unimportant, divided by the total amount of importance across the entire image. For the overlap calculation, we use the input gradient (Simonyan et al., 2014 ###reference_b39###) for the MNIST dataset and Grad-CAM (Selvaraju et al., 2017 ###reference_b40###) for the rest of the datasets. Figs. 3(c) ###reference_sf3###, 3(a) ###reference_sf1### and 3(b) ###reference_sf2### show that our method can guide models away from spurious features and toward what is important. For ISIC, data augmentation replaces irrelevant regions with random noise but seems unable to stop the models from looking at the patches. This indicates that when the location of features matters, and not only their appearance, counterexamples are unable to change model focus."
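The overlap score described above could be computed along the following lines; this sketch assumes the saliency map has already been resized to the input resolution and treats it as a non-negative importance map, whether it comes from input gradients or Grad-CAM.

import numpy as np

def overlap(saliency, feedback, eps=1e-12):
    # saliency: (H, W) importance map (e.g. |input gradient| summed over channels, or upsampled Grad-CAM)
    # feedback: (H, W) binary mask, 1 = area marked as unimportant
    s = np.abs(saliency)
    return float((s * feedback).sum() / (s.sum() + eps))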
64
+ },
65
+ {
66
+ "section_id": "6",
67
+ "parent_section_id": null,
68
+ "section_name": "Conclusion and Discussion",
69
+ "text": "We have introduced a new explanation regularization method that is compatible with the Bayesian formalism. Our focus has been to introduce a method that can be used with Bayesian CNNs, not to compete with methods that improve model focus for regular NNs. Beyond this, we provide the opportunity to add domain knowledge in the input space. The experiments across four datasets show that our method can improve the predictive performance of Bayesian CNNs when they overfit to spurious features or are uncertain about where to focus. Moreover, we can remove focus on spurious features, whether they are spurious because of their appearance or their location.\nWhile our method is simple, it has limitations. Like other explanation regularization methods, it requires humans to specify explanation feedback, which can be labor-intensive. In the future, intelligent ways to obtain explanation feedback should be considered. We also regularize across all channels in a region of the convolutional layers, which can potentially be undesirable; future work should investigate adaptive methods that intelligently select specific filters to regularize."
70
+ }
71
+ ],
72
+ "appendix": [],
73
+ "tables": {
74
+ "1": {
75
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Predictive performance across four datasets with different variations of Decoy MNIST and <abbr class=\"ltx_glossaryref\" title=\"International Skin Imaging Collaboration\"><span class=\"ltx_text ltx_glossary_short\">ISIC</span></abbr>. For datasets with more than two classes, we compute macro-averaged F1 score.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T1.42\" style=\"width:667.1pt;height:145pt;vertical-align:-1.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<p class=\"ltx_p\" id=\"S5.T1.42.42\"><span class=\"ltx_text\" id=\"S5.T1.42.42.42\">\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.42.42.42.42\">\n<span class=\"ltx_thead\">\n<span class=\"ltx_tr\" id=\"S5.T1.42.42.42.42.43.1\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.42.42.42.42.43.1.1\">Dataset</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt ltx_colspan ltx_colspan_2\" id=\"S5.T1.42.42.42.42.43.1.2\">No Regularization</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt ltx_colspan ltx_colspan_2\" id=\"S5.T1.42.42.42.42.43.1.3\">Our Method</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt ltx_colspan ltx_colspan_2\" id=\"S5.T1.42.42.42.42.43.1.4\">Data Augmentation</span></span>\n</span>\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S5.T1.6.6.6.6.6\">\n<span class=\"ltx_td\" id=\"S5.T1.6.6.6.6.6.7\"></span>\n<span class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.1.1.1.1.1.1\">Balanced Accuracy </span>\n<span class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.2.2.2.2.2.2\">F1 </span>\n<span class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.3.3.3.3.3.3\">Balanced Accuracy </span>\n<span class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.4.4.4.4.4.4\">F1 </span>\n<span class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.5.5.5.5.5.5\">Balanced Accuracy </span>\n<span class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T1.6.6.6.6.6.6\">F1 </span></span>\n<span class=\"ltx_tr\" id=\"S5.T1.12.12.12.12.12\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.12.12.12.12.12.7\">Decoy MNIST Color</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.7.7.7.7.7.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.8.8.8.8.8.2\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.9.9.9.9.9.3\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.10.10.10.10.10.4\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.11.11.11.11.11.5\"><span class=\"ltx_ERROR undefined\" id=\"S5.T1.11.11.11.11.11.5.1\">\\B</span></span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S5.T1.12.12.12.12.12.6\"><span class=\"ltx_ERROR undefined\" id=\"S5.T1.12.12.12.12.12.6.1\">\\B</span></span></span>\n<span class=\"ltx_tr\" id=\"S5.T1.18.18.18.18.18\">\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.18.18.18.18.18.7\">Decoy MNIST Position</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.13.13.13.13.13.1\"></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.14.14.14.14.14.2\"></span>\n<span class=\"ltx_td 
ltx_align_left\" id=\"S5.T1.15.15.15.15.15.3\"><span class=\"ltx_ERROR undefined\" id=\"S5.T1.15.15.15.15.15.3.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.16.16.16.16.16.4\"><span class=\"ltx_ERROR undefined\" id=\"S5.T1.16.16.16.16.16.4.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.17.17.17.17.17.5\"></span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.18.18.18.18.18.6\"></span></span>\n<span class=\"ltx_tr\" id=\"S5.T1.24.24.24.24.24\">\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.24.24.24.24.24.7\"><span class=\"ltx_ERROR undefined\" id=\"S5.T1.24.24.24.24.24.7.1\">\\Ac</span>isic</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.19.19.19.19.19.1\"></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.20.20.20.20.20.2\"></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.21.21.21.21.21.3\"></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.22.22.22.22.22.4\"><span class=\"ltx_ERROR undefined\" id=\"S5.T1.22.22.22.22.22.4.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.23.23.23.23.23.5\"><span class=\"ltx_ERROR undefined\" id=\"S5.T1.23.23.23.23.23.5.1\">\\B</span></span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.24.24.24.24.24.6\"></span></span>\n<span class=\"ltx_tr\" id=\"S5.T1.30.30.30.30.30\">\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.30.30.30.30.30.7\"><span class=\"ltx_ERROR undefined\" id=\"S5.T1.30.30.30.30.30.7.1\">\\Ac</span>isic (No Patch Data)</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.25.25.25.25.25.1\"></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.26.26.26.26.26.2\"></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.27.27.27.27.27.3\"><span class=\"ltx_ERROR undefined\" id=\"S5.T1.27.27.27.27.27.3.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.28.28.28.28.28.4\"><span class=\"ltx_ERROR undefined\" id=\"S5.T1.28.28.28.28.28.4.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.29.29.29.29.29.5\"></span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.30.30.30.30.30.6\"></span></span>\n<span class=\"ltx_tr\" id=\"S5.T1.36.36.36.36.36\">\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.36.36.36.36.36.7\">Oxford-IIIT-Pet</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.31.31.31.31.31.1\"></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.32.32.32.32.32.2\"></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.33.33.33.33.33.3\"><span class=\"ltx_ERROR undefined\" id=\"S5.T1.33.33.33.33.33.3.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.34.34.34.34.34.4\"><span class=\"ltx_ERROR undefined\" id=\"S5.T1.34.34.34.34.34.4.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T1.35.35.35.35.35.5\"></span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.36.36.36.36.36.6\"></span></span>\n<span class=\"ltx_tr\" id=\"S5.T1.42.42.42.42.42\">\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T1.42.42.42.42.42.7\"><span class=\"ltx_ERROR undefined\" id=\"S5.T1.42.42.42.42.42.7.1\">\\Ac</span>sbd</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T1.37.37.37.37.37.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T1.38.38.38.38.38.2\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T1.39.39.39.39.39.3\"><span class=\"ltx_ERROR undefined\" id=\"S5.T1.39.39.39.39.39.3.1\">\\B</span></span>\n<span 
class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T1.40.40.40.40.40.4\"><span class=\"ltx_ERROR undefined\" id=\"S5.T1.40.40.40.40.40.4.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T1.41.41.41.41.41.5\"></span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_bb\" id=\"S5.T1.42.42.42.42.42.6\"></span></span>\n</span>\n</span></span></p>\n</span></div>\n</figure>",
76
+ "capture": "Table 1: Predictive performance across four datasets with different variations of Decoy MNIST and ISIC. For datasets with more than two classes, we compute macro-averaged F1 score."
77
+ },
78
+ "2": {
79
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>For dataset with more than two classes, we compute one-vs-rest to get the AUC scores. To compute overlap, we use input gradient for Decoy MNIST and Grad-CAM for the rest of the datasets. Some entries are missing standard deviation, since it is less than .</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T2.41\" style=\"width:604.9pt;height:145pt;vertical-align:-1.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<p class=\"ltx_p\" id=\"S5.T2.41.39\"><span class=\"ltx_text\" id=\"S5.T2.41.39.39\">\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.41.39.39.39\">\n<span class=\"ltx_thead\">\n<span class=\"ltx_tr\" id=\"S5.T2.41.39.39.39.40.1\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.41.39.39.39.40.1.1\">Dataset</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt ltx_colspan ltx_colspan_2\" id=\"S5.T2.41.39.39.39.40.1.2\">No Regularization</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt ltx_colspan ltx_colspan_2\" id=\"S5.T2.41.39.39.39.40.1.3\">Our Method</span>\n<span class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt ltx_colspan ltx_colspan_2\" id=\"S5.T2.41.39.39.39.40.1.4\">Data Augmentation</span></span>\n</span>\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S5.T2.8.6.6.6.6\">\n<span class=\"ltx_td\" id=\"S5.T2.8.6.6.6.6.7\"></span>\n<span class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T2.3.1.1.1.1.1\">AUC </span>\n<span class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T2.4.2.2.2.2.2\">Overlap </span>\n<span class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T2.5.3.3.3.3.3\">AUC </span>\n<span class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T2.6.4.4.4.4.4\">Overlap </span>\n<span class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T2.7.5.5.5.5.5\">AUC </span>\n<span class=\"ltx_td ltx_align_right ltx_th ltx_th_column\" id=\"S5.T2.8.6.6.6.6.6\">Overlap </span></span>\n<span class=\"ltx_tr\" id=\"S5.T2.14.12.12.12.12\">\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.14.12.12.12.12.7\">Decoy MNIST Color</span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.9.7.7.7.7.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.10.8.8.8.8.2\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.11.9.9.9.9.3\"><span class=\"ltx_ERROR undefined\" id=\"S5.T2.11.9.9.9.9.3.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.12.10.10.10.10.4\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.13.11.11.11.11.5\"><span class=\"ltx_ERROR undefined\" id=\"S5.T2.13.11.11.11.11.5.1\">\\B</span></span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"S5.T2.14.12.12.12.12.6\"><span class=\"ltx_ERROR undefined\" id=\"S5.T2.14.12.12.12.12.6.1\">\\B</span></span></span>\n<span class=\"ltx_tr\" id=\"S5.T2.20.18.18.18.18\">\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.20.18.18.18.18.7\">Decoy MNIST Position</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.15.13.13.13.13.1\"></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.16.14.14.14.14.2\"></span>\n<span class=\"ltx_td 
ltx_align_left\" id=\"S5.T2.17.15.15.15.15.3\"><span class=\"ltx_ERROR undefined\" id=\"S5.T2.17.15.15.15.15.3.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.18.16.16.16.16.4\"><span class=\"ltx_ERROR undefined\" id=\"S5.T2.18.16.16.16.16.4.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.19.17.17.17.17.5\"></span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T2.20.18.18.18.18.6\"></span></span>\n<span class=\"ltx_tr\" id=\"S5.T2.26.24.24.24.24\">\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.26.24.24.24.24.7\"><span class=\"ltx_ERROR undefined\" id=\"S5.T2.26.24.24.24.24.7.1\">\\Ac</span>isic</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.21.19.19.19.19.1\"><span class=\"ltx_ERROR undefined\" id=\"S5.T2.21.19.19.19.19.1.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.22.20.20.20.20.2\"></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.23.21.21.21.21.3\">.001)</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.24.22.22.22.22.4\"><span class=\"ltx_ERROR undefined\" id=\"S5.T2.24.22.22.22.22.4.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.25.23.23.23.23.5\"></span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T2.26.24.24.24.24.6\"></span></span>\n<span class=\"ltx_tr\" id=\"S5.T2.29.27.27.27.27\">\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.29.27.27.27.27.4\"><span class=\"ltx_ERROR undefined\" id=\"S5.T2.29.27.27.27.27.4.1\">\\Ac</span>isic (No Patch Data)</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.27.25.25.25.25.1\"></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.29.27.27.27.27.5\">n/a</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.28.26.26.26.26.2\"><span class=\"ltx_ERROR undefined\" id=\"S5.T2.28.26.26.26.26.2.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.29.27.27.27.27.6\">n/a</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.29.27.27.27.27.3\"><span class=\"ltx_ERROR undefined\" id=\"S5.T2.29.27.27.27.27.3.1\">\\B</span></span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T2.29.27.27.27.27.7\">n/a</span></span>\n<span class=\"ltx_tr\" id=\"S5.T2.35.33.33.33.33\">\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.35.33.33.33.33.7\">Oxford-IIIT-Pet</span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.30.28.28.28.28.1\"><span class=\"ltx_ERROR undefined\" id=\"S5.T2.30.28.28.28.28.1.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.31.29.29.29.29.2\"><span class=\"ltx_ERROR undefined\" id=\"S5.T2.31.29.29.29.29.2.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.32.30.30.30.30.3\"></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.33.31.31.31.31.4\"></span>\n<span class=\"ltx_td ltx_align_left\" id=\"S5.T2.34.32.32.32.32.5\"></span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T2.35.33.33.33.33.6\"></span></span>\n<span class=\"ltx_tr\" id=\"S5.T2.41.39.39.39.39\">\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.41.39.39.39.39.7\"><span class=\"ltx_ERROR undefined\" id=\"S5.T2.41.39.39.39.39.7.1\">\\Ac</span>sbd</span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.36.34.34.34.34.1\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.37.35.35.35.35.2\"></span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.38.36.36.36.36.3\"><span class=\"ltx_ERROR undefined\" 
id=\"S5.T2.38.36.36.36.36.3.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.39.37.37.37.37.4\"><span class=\"ltx_ERROR undefined\" id=\"S5.T2.39.37.37.37.37.4.1\">\\B</span></span>\n<span class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.40.38.38.38.38.5\"></span>\n<span class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_bb\" id=\"S5.T2.41.39.39.39.39.6\"></span></span>\n</span>\n</span></span></p>\n</span></div>\n</figure>",
80
+ "capture": "Table 2: For dataset with more than two classes, we compute one-vs-rest to get the AUC scores. To compute overlap, we use input gradient for Decoy MNIST and Grad-CAM for the rest of the datasets. Some entries are missing standard deviation, since it is less than ."
81
+ }
82
+ },
83
+ "image_paths": {
84
+ "1": {
85
+ "figure_path": "2105.02653v3_figure_1.png",
86
+ "caption": "Figure 1: Method Overview. a) During training, a NN gets an input sample $\\mathsf{X}_{i}\\in\\mathbb{R}^{(w\\times h\\times c)}$ from the training dataset and tries to match the prediction $\\hat{y}_{i}$ with the ground truth label $y_{i}$. Our method provides the NN with additional evidence in the form of explanation feedback $\\mathbf{E}_{i}\\in\\{0,1\\}^{(w\\times h)}$. A value of $1$ in $\\mathbf{E}_{i}$ indicates a region in the input space as irrelevant to the prediction, while $0$ indicates that we do not have any concern. The explanation feedback is used to regularize the model\u2019s focus to give correct explanation and add domain knowledge. b) A new input sample $\\mathsf{X}_{j}$ from the test dataset is fed to the model and an explanation is generated. Without explanation regularization, the NN uses the patch to make the prediction. With our method, the NN no longer looks at the patch in the image. The skin images are from the ISIC dataset (Codella et al., 2019; Tschandl et al., 2018; Rieger et al., 2020).",
87
+ "url": "http://arxiv.org/html/2105.02653v3/x1.png"
88
+ },
89
+ "2": {
90
+ "figure_path": "2105.02653v3_figure_2.png",
91
+ "caption": "Figure 2: Finding Activations. Given an explanation feedback $\\mathbf{E}_{i}\\in\\{0,1\\}^{(w\\times h)}$ for the sample $\\mathsf{X}_{i}\\in\\mathbb{R}^{(w\\times h\\times c)}$, we find activations to add the explanation feedback. A value of $1$ in $\\mathbf{E}_{i}$ indicates irrelevant regions in the input. A value of $0$ denotes features for which no preference is given. First, $\\mathbf{E}_{i}$ is downsized to the size of the feature maps of the last convolutional layer using the function $f(\\cdot)$. Afterward, since the heights and widths are the same, we simply overlay the explanation feedback with the feature maps to find the activations to target. Specifically, we inject this information via the likelihood term of Eq. 3. The skin image is from the ISIC dataset (Codella et al., 2019; Tschandl et al., 2018; Rieger et al., 2020).",
92
+ "url": "http://arxiv.org/html/2105.02653v3/x2.png"
93
+ },
94
+ "3(a)": {
95
+ "figure_path": "2105.02653v3_figure_3(a).png",
96
+ "caption": "(a) ISIC. Our method removes the focus on patches that the data augmentation approach is unable to.\nFigure 3: Examples of saliency maps on samples randomly drawn from the test dataset. More examples can be found in the link given on the first page.",
97
+ "url": "http://arxiv.org/html/2105.02653v3/x3.png"
98
+ },
99
+ "3(b)": {
100
+ "figure_path": "2105.02653v3_figure_3(b).png",
101
+ "caption": "(b) SBD. Our method makes the saliency maps more focused and concentrated.\nFigure 3: Examples of saliency maps on samples randomly drawn from the test dataset. More examples can be found in the link given on the first page.",
102
+ "url": "http://arxiv.org/html/2105.02653v3/x4.png"
103
+ },
104
+ "3(c)": {
105
+ "figure_path": "2105.02653v3_figure_3(c).png",
106
+ "caption": "(c) Decoy MNIST Color. Both our method and data augmentation can remove focus on decoys. Our method makes the saliency maps more focused.\nFigure 3: Examples of saliency maps on samples randomly drawn from the test dataset. More examples can be found in the link given on the first page.",
107
+ "url": "http://arxiv.org/html/2105.02653v3/x5.png"
108
+ },
109
+ "3(d)": {
110
+ "figure_path": "2105.02653v3_figure_3(d).png",
111
+ "caption": "(d) Oxford-IIIT-Pet. When no performance gain can be made, the saliency maps are similar.\nFigure 3: Examples of saliency maps on samples randomly drawn from the test dataset. More examples can be found in the link given on the first page.",
112
+ "url": "http://arxiv.org/html/2105.02653v3/x6.png"
113
+ }
114
+ },
115
+ "validation": true,
116
+ "references": [
117
+ {
118
+ "1": {
119
+ "title": "Deep Learning.",
120
+ "author": "Ian Goodfellow, Yoshua Bengio, and Aaron Courville.",
121
+ "venue": "MIT Press, 2016.",
122
+ "url": null
123
+ }
124
+ },
125
+ {
126
+ "2": {
127
+ "title": "A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play.",
128
+ "author": "David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis.",
129
+ "venue": "Science, 2018.",
130
+ "url": null
131
+ }
132
+ },
133
+ {
134
+ "3": {
135
+ "title": "A guide to deep learning in healthcare.",
136
+ "author": "Andre Esteva, Alexandre Robicquet, Bharath Ramsundar, Volodymyr Kuleshov, Mark DePristo, Katherine Chou, Claire Cui, Greg Corrado, Sebastian Thrun, and Jeff Dean.",
137
+ "venue": "Nature Medicine, 2019.",
138
+ "url": null
139
+ }
140
+ },
141
+ {
142
+ "4": {
143
+ "title": "Deep Reinforcement Learning for Autonomous Driving: A Survey.",
144
+ "author": "B. Ravi Kiran, Ibrahim Sobh, Victor Talpaert, Patrick Mannion, Ahmad A. Al Sallab, Senthil Kumar Yogamani, and Patrick P\u00e9rez.",
145
+ "venue": "IEEE Trans. Intell. Transp. Syst., 2022.",
146
+ "url": null
147
+ }
148
+ },
149
+ {
150
+ "5": {
151
+ "title": "Intriguing properties of neural networks.",
152
+ "author": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus.",
153
+ "venue": "In Proc. of ICLR, 2014.",
154
+ "url": null
155
+ }
156
+ },
157
+ {
158
+ "6": {
159
+ "title": "Unmasking Clever Hans predictors and assessing what machines really learn.",
160
+ "author": "Sebastian Lapuschkin, Stephan W\u00e4ldchen, Alexander Binder, Gr\u00e9goire Montavon, Wojciech Samek, and Klaus-Robert M\u00fcller.",
161
+ "venue": "Nature Communications, 2019.",
162
+ "url": null
163
+ }
164
+ },
165
+ {
166
+ "7": {
167
+ "title": "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI.",
168
+ "author": "Alejandro Barredo Arrieta, Natalia D\u00edaz Rodr\u00edguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garc\u00eda, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, and Francisco Herrera.",
169
+ "venue": "Inf. Fusion, 2020.",
170
+ "url": null
171
+ }
172
+ },
173
+ {
174
+ "8": {
175
+ "title": "Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations.",
176
+ "author": "Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez.",
177
+ "venue": "In Proc. of IJCAI, 2017.",
178
+ "url": null
179
+ }
180
+ },
181
+ {
182
+ "9": {
183
+ "title": "Weight Uncertainty in Neural Network.",
184
+ "author": "Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra.",
185
+ "venue": "In Proc. of ICML, 2015.",
186
+ "url": null
187
+ }
188
+ },
189
+ {
190
+ "10": {
191
+ "title": "Hands-On Bayesian Neural Networks - A Tutorial for Deep Learning Users.",
192
+ "author": "Laurent Valentin Jospin, Hamid Laga, Farid Boussa\u00efd, Wray L. Buntine, and Mohammed Bennamoun.",
193
+ "venue": "IEEE Comput. Intell. Mag., 2022.",
194
+ "url": null
195
+ }
196
+ },
197
+ {
198
+ "11": {
199
+ "title": "Understanding Deep Learning.",
200
+ "author": "Simon J.D. Prince.",
201
+ "venue": "The MIT Press, 2023.",
202
+ "url": null
203
+ }
204
+ },
205
+ {
206
+ "12": {
207
+ "title": "Explanatory Interactive Machine Learning.",
208
+ "author": "Stefano Teso and Kristian Kersting.",
209
+ "venue": "In Proc. of AIES, 2019.",
210
+ "url": null
211
+ }
212
+ },
213
+ {
214
+ "13": {
215
+ "title": "Interpretations are Useful: Penalizing Explanations to Align Neural Networks with Prior Knowledge.",
216
+ "author": "Laura Rieger, Chandan Singh, W. James Murdoch, and Bin Yu.",
217
+ "venue": "In Proc. of ICML, 2020.",
218
+ "url": null
219
+ }
220
+ },
221
+ {
222
+ "14": {
223
+ "title": "Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC).",
224
+ "author": "Noel C. F. Codella, Veronica Rotemberg, Philipp Tschandl, M. Emre Celebi, Stephen W. Dusza, David A. Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael A. Marchetti, Harald Kittler, and Allan Halpern.",
225
+ "venue": "CoRR, 2019.",
226
+ "url": null
227
+ }
228
+ },
229
+ {
230
+ "15": {
231
+ "title": "The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions.",
232
+ "author": "Philipp Tschandl, Cliff Rosendahl, and Harald Kittler.",
233
+ "venue": "Scientific Data, 2018.",
234
+ "url": null
235
+ }
236
+ },
237
+ {
238
+ "16": {
239
+ "title": "Probabilistic Machine Learning: Advanced Topics.",
240
+ "author": "Kevin P. Murphy.",
241
+ "venue": "MIT Press, 2023.",
242
+ "url": null
243
+ }
244
+ },
245
+ {
246
+ "17": {
247
+ "title": "Variational Dropout and the Local Reparameterization Trick.",
248
+ "author": "Diederik P. Kingma, Tim Salimans, and Max Welling.",
249
+ "venue": "In Proc. of NIPS, 2015.",
250
+ "url": null
251
+ }
252
+ },
253
+ {
254
+ "18": {
255
+ "title": "Variational Inference: A Review for Statisticians.",
256
+ "author": "David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe.",
257
+ "venue": "Journal of the American Statistical Association, 2017.",
258
+ "url": null
259
+ }
260
+ },
261
+ {
262
+ "19": {
263
+ "title": "Auto-Encoding Variational Bayes.",
264
+ "author": "Diederik P. Kingma and Max Welling.",
265
+ "venue": "In Proc. of ICLR, 2014.",
266
+ "url": null
267
+ }
268
+ },
269
+ {
270
+ "20": {
271
+ "title": "Variational Dropout Sparsifies Deep Neural Networks.",
272
+ "author": "Dmitry Molchanov, Arsenii Ashukha, and Dmitry P. Vetrov.",
273
+ "venue": "In Proc. of ICML, 2017.",
274
+ "url": null
275
+ }
276
+ },
277
+ {
278
+ "21": {
279
+ "title": "DARPA\u2019s Explainable Artificial Intelligence (XAI) Program.",
280
+ "author": "David Gunning and David W. Aha.",
281
+ "venue": "AI Mag., 2019.",
282
+ "url": null
283
+ }
284
+ },
285
+ {
286
+ "22": {
287
+ "title": "The mythos of model interpretability.",
288
+ "author": "Zachary C. Lipton.",
289
+ "venue": "Commun. ACM, 2018.",
290
+ "url": null
291
+ }
292
+ },
293
+ {
294
+ "23": {
295
+ "title": "Incorporating Priors with Feature Attribution on Text Classification.",
296
+ "author": "Frederick Liu and Besim Avci.",
297
+ "venue": "In Proc. of ACL, 2019.",
298
+ "url": null
299
+ }
300
+ },
301
+ {
302
+ "24": {
303
+ "title": "Learning Credible Deep Neural Networks with Rationale Regularization.",
304
+ "author": "Mengnan Du, Ninghao Liu, Fan Yang, and Xia Hu.",
305
+ "venue": "In Proc. of ICDM, 2019.",
306
+ "url": null
307
+ }
308
+ },
309
+ {
310
+ "25": {
311
+ "title": "Right for Better Reasons: Training Differentiable Models by Constraining their Influence Functions.",
312
+ "author": "Xiaoting Shao, Arseny Skryagin, Wolfgang Stammer, Patrick Schramowski, and Kristian Kersting.",
313
+ "venue": "In Proc. of AAAI, 2021.",
314
+ "url": null
315
+ }
316
+ },
317
+ {
318
+ "26": {
319
+ "title": "Improving performance of deep learning models with axiomatic attribution priors and expected gradients.",
320
+ "author": "Gabriel G. Erion, Joseph D. Janizek, Pascal Sturmfels, Scott M. Lundberg, and Su-In Lee.",
321
+ "venue": "Nat. Mach. Intell., 2021.",
322
+ "url": null
323
+ }
324
+ },
325
+ {
326
+ "27": {
327
+ "title": "Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded.",
328
+ "author": "Ramprasaath Ramasamy Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry P. Heck, Dhruv Batra, and Devi Parikh.",
329
+ "venue": "In Proc. of ICCV, 2019.",
330
+ "url": null
331
+ }
332
+ },
333
+ {
334
+ "28": {
335
+ "title": "Making deep neural networks right for the right scientific reasons by interacting with their explanations.",
336
+ "author": "Patrick Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Franziska Herbert, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, and Kristian Kersting.",
337
+ "venue": "Nat. Mach. Intell., 2020.",
338
+ "url": null
339
+ }
340
+ },
341
+ {
342
+ "29": {
343
+ "title": "PyTorch 2: Faster Machine Learning Through Dynamic Python Bytecode Transformation and Graph Compilation.",
344
+ "author": "Jason Ansel, Edward Z. Yang, Horace He, Natalia Gimelshein, Animesh Jain, Michael Voznesensky, Bin Bao, Peter Bell, David Berard, Evgeni Burovski, Geeta Chauhan, Anjali Chourdia, Will Constable, Alban Desmaison, Zachary DeVito, Elias Ellison, Will Feng, Jiong Gong, Michael Gschwind, Brian Hirsh, Sherlock Huang, Kshiteej Kalambarkar, Laurent Kirsch, Michael Lazos, Mario Lezcano, Yanbo Liang, Jason Liang, Yinghai Lu, C. K. Luk, Bert Maher, Yunjie Pan, Christian Puhrsch, Matthias Reso, Mark Saroufim, Marcos Yukio Siraichi, Helen Suk, Shunting Zhang, Michael Suo, Phil Tillet, Xu Zhao, Eikan Wang, Keren Zhou, Richard Zou, Xiaodong Wang, Ajit Mathews, William Wen, Gregory Chanan, Peng Wu, and Soumith Chintala.",
345
+ "venue": "In Proc. of ASPLOS, 2024.",
346
+ "url": null
347
+ }
348
+ },
349
+ {
350
+ "30": {
351
+ "title": "MNIST handwritten digit database.",
352
+ "author": "Yann LeCun, Corinna Cortes, and CJ Burges.",
353
+ "venue": "ATT Labs, 2010.",
354
+ "url": null
355
+ }
356
+ },
357
+ {
358
+ "31": {
359
+ "title": "Cats and dogs.",
360
+ "author": "Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar.",
361
+ "venue": "In Proc. of CVPR, 2012.",
362
+ "url": null
363
+ }
364
+ },
365
+ {
366
+ "32": {
367
+ "title": "Semantic contours from inverse detectors.",
368
+ "author": "Bharath Hariharan, Pablo Arbel\u00e1ez, Lubomir D. Bourdev, Subhransu Maji, and Jitendra Malik.",
369
+ "venue": "In Proc. of ICCV, 2011.",
370
+ "url": null
371
+ }
372
+ },
373
+ {
374
+ "33": {
375
+ "title": "The PASCAL Visual Object Classes Challenge 2011 (VOC2011) Results, 2011.",
376
+ "author": "M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman.",
377
+ "venue": "URL http://host.robots.ox.ac.uk/pascal/VOC/.",
378
+ "url": null
379
+ }
380
+ },
381
+ {
382
+ "34": {
383
+ "title": "Gradient-based learning applied to document recognition.",
384
+ "author": "Yann LeCun, L\u00e9on Bottou, Yoshua Bengio, and Patrick Haffner.",
385
+ "venue": "Proc. IEEE, 1998.",
386
+ "url": null
387
+ }
388
+ },
389
+ {
390
+ "35": {
391
+ "title": "ImageNet Classification with Deep Convolutional Neural Networks.",
392
+ "author": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton.",
393
+ "venue": "In Proc. of NIPS, 2012.",
394
+ "url": null
395
+ }
396
+ },
397
+ {
398
+ "36": {
399
+ "title": "PyTorch Lightning, August 2024.",
400
+ "author": "William Falcon and The PyTorch Lightning team.",
401
+ "venue": null,
402
+ "url": null
403
+ }
404
+ },
405
+ {
406
+ "37": {
407
+ "title": "Scikit-learn: Machine Learning in Python.",
408
+ "author": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake VanderPlas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay.",
409
+ "venue": "J. Mach. Learn. Res., 2011.",
410
+ "url": null
411
+ }
412
+ },
413
+ {
414
+ "38": {
415
+ "title": "Captum: A unified and generic model interpretability library for PyTorch.",
416
+ "author": "Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson.",
417
+ "venue": "CoRR, 2020.",
418
+ "url": null
419
+ }
420
+ },
421
+ {
422
+ "39": {
423
+ "title": "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps.",
424
+ "author": "Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman.",
425
+ "venue": "In Proc. of ICLR Workshop Track Proceedings, 2014.",
426
+ "url": null
427
+ }
428
+ },
429
+ {
430
+ "40": {
431
+ "title": "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization.",
432
+ "author": "Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra.",
433
+ "venue": "In Proc. of ICCV, 2017.",
434
+ "url": null
435
+ }
436
+ },
437
+ {
438
+ "41": {
439
+ "title": "A Latex style and template for paper preprints (based on NIPS style), 2020.",
440
+ "author": "George Kour.",
441
+ "venue": "URL https://github.com/kourgeorge/arxiv-style.",
442
+ "url": null
443
+ }
444
+ }
445
+ ],
446
+ "url": "http://arxiv.org/html/2105.02653v3"
447
+ }
20241127/2201.11192v2.json ADDED
@@ -0,0 +1,600 @@
 
1
+ {
2
+ "title": "ReforesTree: A Dataset for Estimating Tropical Forest Carbon Stock with Deep Learning and Aerial Imagery",
3
+ "abstract": "Forest biomass is a key influence for future climate, and the world urgently needs highly scalable financing schemes, such as carbon offsetting certifications, to protect and restore forests. Current manual forest carbon stock inventory methods of measuring single trees by hand are time, labour, and cost intensive and have been shown to be subjective. They can lead to substantial overestimation of the carbon stock and ultimately distrust in forest financing. The potential for impact and scale of leveraging advancements in machine learning and remote sensing technologies is promising, but needs to be of high quality in order to replace the current forest stock protocols for certifications.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The degradation of the natural world is unprecedented in human history and a key driver of the climate crisis and the Holocene extinction (Ceballos and Ehrlich 2018 ###reference_b8###). Forests play a significant role in the planet\u2019s carbon cycle, directly impacting local and global climate through its biogeophysical effects and as carbon sinks, sequestering and storing carbon through photosynthesis (Griscom et al. 2017 ###reference_b16###).\nHowever, since the year 2000, we have lost 361 million ha of forest cover, equivalent to the size of Europe, mainly in tropical areas (Hansen et al. 2013 ###reference_b18###). This accounts for 18% of global anthropogenic emissions and contributes to driving up atmospherical carbon levels (IPCC 2019 ###reference_b21###). Forests, especially tropical forests, also provide habitats for 80% of land-based biodiversity and with the increasing risk and frequency of wildfires, droughts, and extreme weather, forest ecosystems are under severe pressure (Shi et al. 2021 ###reference_b42###).\nTo avoid planetary tipping points (Rockst\u00f6m et al. 2009 ###reference_b36###) and maintain a stable and livable climate, mankind urgently need to reduce carbon emissions until 2050 and restore essential ecosystems (IPCC 2021 ###reference_b22###). Forests and natural carbon sequestration are important climate change mitigation strategies (Canadell and Raupach 2008 ###reference_b7###) with a biophysical mitigation potential of 5,380 MtCO2 per year on average until 2050 (IPCC 2019 ###reference_b21###).\nForestry is a large industry and the causes of deforestation are mostly economically driven (FAO 2020 ###reference_b12###) (Geist and Lambin 2001 ###reference_b14###). For the last 20 years, major conservation efforts have been underway to mitigate and safeguard against these losses. One of the global financing strategies is carbon offsets (Blaufelder et al. 2021 ###reference_b5###). Initially, it started as the Clean Development Mechanism (CDM) under the Kyoto Protocol, allowing governments and business organizations from industrialized countries to invest in forestry in developing countries by buying carbon credits to offset industrialized emissions (FAO 2020 ###reference_b12###) Several other independent bodies have later developed official standards for verifying and certifying carbon offsetting projects, such as the Gold Standard (GS) and the Verified Carbon Standard (VERRA). The certification process for forest carbon offsetting projects is capital and labour intensive, especially due to the high cost of manual monitoring, verification and reporting (MVR) of the forest carbon stock.\nThe carbon offsetting market is rapidly increasing and expected to grow by a factor of 100 until 2050 due to high demand and available capital (Blaufelder et al. 2021 ###reference_b5###). However, the main obstacle is limited supply of offsetting projects as forest owners lack upfront capital and market access (Kreibich and Hermwille 2021 ###reference_b24###).\nRecent research investigations (Badgley et al. 2021 ###reference_b1###; West et al. 2020 ###reference_b50###) have shown that the current manual forest carbon stock practices systematically overestimate forestry carbon offsetting projects with up to 29% of the offsets analyzed, totaling up to 30 million tCO2e (CO2 equivalents) and worth approximately $410 million. 
The overestimation was identified to come from subjective estimations and modeling of the carbon stock baseline and of the project\u2019s additionally and leakage reporting. There is thus a need for higher quality carbon offsetting protocols and higher transparency and accountability of the MVR of these projects (Haya et al. 2020 ###reference_b19###).\nThere are three key aspects that are important for the use of remote sensing in MVR of forest carbon stock. One aspect is financial; using available and accessible technology and sensors to lower the cost and upfront capital requirements for forest owners to get certified, especially in low and middle-income countries. The second aspect is reducing subjectivity in estimating carbon stock and increasing trustworthiness and transparency in the carbon offsetting certification protocols. And lastly, the solutions need to be scalable due to the urgency of financing forest restoration, especially in tropical regions.\nVarious verification bodies, new ventures, and academia are currently developing remote sensing technologies to automate parts of the certification process of forestry carbon offsetting projects (Narine, Popescu, and Malambo 2020 ###reference_b31###; Dao et al. 2019 ###reference_b10###). Satellite imagery is increasing in quality and availability and, combined with state-of-the-art deep learning and lidar, promises to soon map every tree on earth (Hanan and Anchang 2020 ###reference_b17###) and to enable forest aboveground biomass and carbon to be estimated at scale (Saatchi et al. 2011 ###reference_b37###; Santoro et al. 2021 ###reference_b39###). Compared to current manual estimates, these advancements reduce time and cost and increase transparency and accountability, thus lowering the threshold for forest owners and buyers to enter the market (L\u00fctjens, Liebenwein, and Kramer 2019 ###reference_b25###). Nevertheless, these algorithms risk additionally contributing to the systematic overestimation of carbon stocks, not reducing it, and are not applicable for small-scale forests, below 10,000 ha (White et al. 2018 ###reference_b51###), (Global Forest Watch 2019 ###reference_b15###).\nAccurately estimating forest carbon stock, especially for small scale carbon offset projects, presents several interesting machine learning challenges, such as high variance of species and occlusion of individual tree crowns. There are many promising approaches, such as hyperspectral species classification (Schiefer et al. 2020 ###reference_b40###), lidar-based height measurements (Ganz, K\u00e4ber, and Adler 2019 ###reference_b13###) and individual tree crown segmentation across sites (Weinstein et al. 2020b ###reference_b49###). However, these applications have been developed mainly on datasets from temperate forests and, to the knowledge of the authors, there is no publicly available dataset of tropical forests with both aerial imagery and ground truth field measurements.\n\n###figure_1### Here, we present ReforesTree, a dataset of six tropical agroforestry reforestation project sites with individual tree crown bounding boxes of over 4,600 trees matched with their respective diameter at breast height (DBH), species, species group, aboveground biomass (AGB), and carbon stock. 
This dataset represents ground truth field data mapped with low-cost, high-resolution RGB drone imagery, to be used to train new models for carbon offsetting protocols and to benchmark existing models.\nTo summarize, with ReforesTree, we contribute the following: 1) the first publicly available dataset of tropical agroforestry containing ground truth field data matched with high-resolution RGB drone imagery at the individual tree level, and 2) a methodology for reducing the current overestimation of forest carbon stock through deep learning and aerial imagery for carbon offsetting projects."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Deep Learning for Remote Sensing",
21
+ "text": "In recent years, deep learning (DL), and especially deep convolutional neural networks (CNN) are increasing in popularity for image analysis in the remote-sensing community (Ma et al. 2019 ###reference_b27###), (Zhu et al. 2017 ###reference_b55###). With the increase in computation power, larger datasets, transfer learning, and breakthroughs in network architecture, DL models have outperformed conventional image processing methods in several image tasks such as land use and land cover (LULC) classification, segmentation and detection. Examples of deep supervised learning in remote sensing are the prediction of wildfires (Yang, Lupascu, and Meel 2021 ###reference_b52###), detection of invasive species (Bjorck et al. 2021 ###reference_b4###). CNNs offer feature extraction capabilities in recognizing patterns in both spatial and temporal data, even with low resolution inputs. With recent advances in meta and few shot learning these models can be trained and generalized on larger datasets and fine-tuned for local variance."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Manual Forest Inventory",
27
+ "text": "The standardized forest carbon stock inventory consists of manually measuring and registering sample trees of a project site. Tree metrics such as diameter at breast height (DBH), height, and species are then put through scientifically developed regression models called allometric equations to calculate the aboveground biomass (AGB), as seen in Figure 2 ###reference_###. The total biomass of a forest is the total AGB plus the below-ground biomass (BGB), the latter calculated using a root-to-shoot ratio specific to the forest type and region (Ma et al. 2021 ###reference_b26###).\n\n###figure_2### The procedure for calculating the correct amount of carbon offsets (CO2e) to be certified for a project is standardized through (Pearson, Walker, and Brown 2005 ###reference_b34###), as shown in Figure 2 ###reference_###. The CO2e, also known as the baseline forest carbon stock, is equivalent to the total biomass divided by two. Despite being prone to error propagation (Petrokofsky et al. 2012 ###reference_b35###; Malhi et al. 2004 ###reference_b28###) and shown to systematically overestimate carbon stock (Badgley et al. 2021 ###reference_b1###), this is currently the standardized forest inventory method for certification of forestry projects."
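A minimal sketch of this per-tree arithmetic is shown below; the allometric coefficients and the root-to-shoot ratio are placeholders only, since in practice they are species-, forest-type- and region-specific.

def tree_carbon_stock_kg(dbh_cm, a=0.11, b=2.5, root_to_shoot=0.24):
    # a, b: placeholder coefficients of a generic power-law allometric equation
    # root_to_shoot: placeholder ratio for deriving below-ground biomass from AGB
    agb = a * dbh_cm ** b          # aboveground biomass (kg)
    bgb = root_to_shoot * agb      # below-ground biomass (kg)
    total_biomass = agb + bgb
    return total_biomass / 2.0     # baseline carbon stock (kg), i.e. total biomass divided by two

Summing this quantity over all measured trees, and extrapolating from the sampled plots to the project area, gives the project-level estimate that the certification protocols rely on.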
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Related Methods and Models",
33
+ "text": "The following are three types of methods to estimate forest carbon stock remotely, adapted from (Sun and Liu 2019 ###reference_b44###); 1) inventory-based models, based on national and regional forest inventories and regression models, are known to overestimate due to over-representations of dense commercial forests in the data, (Global Forest Watch 2019 ###reference_b15###). 2) Satellite-based models leveraging datasets from optical remote sensing, synthetic aperture radar satellites (SAR), and lidar (LiDAR) to create global aboveground biomass and carbon maps (Santoro et al. 2021 ###reference_b39###; Saatchi et al. 2011 ###reference_b37###; Spawn, Sullivan, and Lark 2020 ###reference_b43###). 3) Ecosystem-based models using topography, elevation, slope, aspect, and other environmental factors to construct statistical models and quantitatively describe the process of forest carbon cycle to estimate forest carbon stock(Ma et al. 2021 ###reference_b26###).\nThe most scalable and affordable of these methods are, evidently, satellites-based models. Nevertheless, these models and global maps are yet to estimate carbon stock at local scale and provide accurate estimates of highly heterogeneous and dense forest areas due to their low resolution of 30-300m (Bagheri, Shataee, and Erfanifard 2021 ###reference_b2###). An individual tree-based model that takes the individual overstory trees into account can provide this accuracy, especially if fused with geostatistical and satellite data.\nIn recent years, researchers have achieved high accuracy for standard forestry inventory tasks such as individual tree crown detection (Weinstein et al. 2019 ###reference_b48###), lidar-based height estimation (Ganz, K\u00e4ber, and Adler 2019 ###reference_b13###), and species classification (Miyoshi et al. 2020 ###reference_b29###; Schiefer et al. 2020 ###reference_b40###; M\u00e4yr\u00e4 et al. 2021 ###reference_b30###), using deep learning models and aerial imagery. This shows high potential for combining high-resolution imagery with deep learning models as a method for accurate carbon stock estimation for small-scale reforestation projects (Sun and Liu 2019 ###reference_b44###).\nAs most tropical forests are situated in low to middle income countries, without access to hyperspectral, lidar and other more advanced sensors, the models need to be developed using available technologies. A trade-off for accuracy and data availability is basic high-resolution RGB drone imagery. Drone imagery (1-3cm/px resolution), combined with CNN, has previously been used to directly estimate biomass and carbon stock in individual mangrove trees (Jones et al. 2020 ###reference_b23###) or indirectly by detecting species or tree metrics such as DBH or height (N\u00e5f\u00e4lt 2018 ###reference_b32###; Omasa et al. 2003 ###reference_b33###), achieving an accuracy similar to manual field measurements. And by leveraging multi-fusion approaches (Du and Zare 2020 ###reference_b11###; Zhang 2010 ###reference_b54###), e.g. combining low-resolution satellite, high-resolution drone imagery, and field measurements and contextual ecological or topological data, and multi-task learning (Crawshaw 2020 ###reference_b9###), e.g. tree metrics and carbon storage factors as auxiliary tasks, these models can replace and scale the existing manual forest inventory.\nThere are several datasets for tree detection and classification from drone imagery such as the NEON dataset (Weinstein et al. 
2020a ###reference_b47###) or the Swedish Forest Agency dataset, which mainly cover temperate forests in the US or Europe. To our knowledge, there are no publicly available datasets including both field measurements and drone imagery of heterogeneous tropical forests."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Dataset and Method",
39
+ "text": "The ReforesTree dataset consists of six agro-forestry sites in the central coastal region of Ecuador. The sites are of dry tropical forest type and eligible for carbon offsetting certification with forest inventory done and drone imagery captured in 2020. See Table 1 for information on each site."
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Forest Inventory Data and Drone Imagery",
45
+ "text": "Field measurements were done by hand for all live trees and bushes within the site boundaries and include GPS location, species, and diameter at breast height (DBH) per tree. Drone imagery was captured in 2020 by an RGB camera from a Mavic 2 Pro drone with a resolution of 2cm per pixel. Each site is around 0.5 ha, mainly containing banana trees (Musaceae) and cacao plants (Cacao), planted in 2016-2019.\nThe aboveground biomass (AGB) is calculated using published allometric equations for tropical agro-forestry, namely Eq.1 ###reference_### for fruit trees, including citrus fruits (Segura, Kanninen, and Su\u00e1rez 2006 ###reference_b41###), Eq.2 ###reference_### banana trees (Van Noordwijk et al. 2002 ###reference_b45###), Eq.3 ###reference_### for cacao (Yuliasmara, Wibawa, and Prawoto 2009 ###reference_b53###), and Eq.4 ###reference_### for shade trees (timber) (Brown and Iverson 1992 ###reference_b6###). These are commonly used in global certification standards. The carbon stock is calculated through the standard forest inventory methodology using a root-to-shoot ratio of 22%, which is standard for dry tropical reforestation sites (Ma et al. 2019 ###reference_b27###)."
46
+ },
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Data Processing and Method",
51
+ "text": "The raw data is processed in several steps as seen in Figure 3. The goal of this process is to have a machine learning ready dataset that consists of matched drone image of an individual tree with the trees labels, such as AGB value. All the drone images have been cropped to fit tightly the boundaries of the field measured areas. The details of this cropping process, and the code repository, are in the Appendix.\n\n###figure_3### Initially the RGB orthomosaics are cut into 40004000 tiles and sent through DeepForest, a python package for predicting individual tree crowns from RGB imagery (Weinstein et al. 2019 ###reference_b48###), fine-tuned on some manually labelled bounding boxes from the sites. Afterwards, the bounding boxes containing more than 80% white were filtered out, e.g. bounding boxes lying on the border of the drone imagery, and manually labeled to banana and non-banana, due to the easily recognizable characteristics of banana trees, resulting in clear bounding boxes of all trees as shown in Figure 4 ###reference_###.\n\n###figure_4### To fuse the tree information extracted from the ground measurements with the bounding boxes of the trees detected, we used OneForest, a recent machine learning approach for fusing citizen data with drone imagery. To remove noise introduced in both GPS locations, OneForest uses a greedy optimal transport algorithm. This is a known coupling method to map between two GPS positions (center of bounding box from drone imagery and GPS location of tree from field data). Developed by Villani (Villani 2003 ###reference_b46###), the methods finds the minimal distance between two distributions via a convex linear program optimizing for a matching that moves the mass from one distribution to the other with minimal cost. The cost is usually defined as the euclidean distance or the Kulback-Leibler divergence between the distributions. The optimum, i.e. the minimal distance between the two distributions, is called the Wasserstein metric."
52
+ },
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Baseline CNN Model",
57
+ "text": "With a dataset of matched bounding boxes and tree labels, we fine-tuned a basic pre-trained CNN, ResNet18 (He et al. 2015 ###reference_b20###) with a mean-square-error loss to estimate individual tree AGB. The results were satisfying despite the simple baseline model, and proves that the individual tree estimation from drone imagery has potential.\nFourteen images were identified as being larger than the expected crown size of a tree, and they were center cropped at 800800. To preserve the crown size information, the smaller images were zero-padded up to 800800, before all images were resized to fit the network architecture.\nThe dataset has is unbalanced with regards to species, of which 43% is cacao and 32% is banana. Additionally, due to the trees being planted between 2016-2019, many of the trees have similar size (e.g. DBH) and half of the trees have DBH between 7-10cm. The training dataset consisted of equal number of samples of species and DBH, and from the different project sites."
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Experiments",
63
+ "text": "With the emerging new biomass maps and forest stock estimation models, we used the ReforesTree dataset to benchmark these maps and compare with our baseline CNN model for AGB estimation. We compared the maps taken from (Global Forest Watch 2019 ###reference_b15###), (Spawn, Sullivan, and Lark 2020 ###reference_b43###), and (Santoro et al. 2021 ###reference_b39###). The Global Forest Watch\u2019s Above-Ground Woody Biomass dataset is a global map of AGB and carbon density at 30m30m resolution for the year 2000. It is based on more than 700,000 quality-filtered Geoscience Laser Altimeter System (GLAS) lidar observations using machine learning models based on allometric equations for the different regions and vegetation types. The second dataset from (Spawn, Sullivan, and Lark 2020 ###reference_b43###) is a 300m300m harmonized map based on overlayed input maps. The input maps were allocated in proportion to the relative spatial extent of each vegetation type using ancillary maps of tree cover and landcover, and a rule-based decision schema. The last, and most recent 100m100m dataset from (Santoro et al. 2021 ###reference_b39###) is obtained by spaceborne SAR (ALOS PALSAR, Envisat ASAR), optical (Landsat-7), lidar (ICESAT) and auxiliary datasets with multiple estimation procedures with a set of biomass expansion and conversion factors following approaches to extend ground estimates of wood density and stem-to-total biomass expansion factors.\nAs seen in Table 2, all of the available global AGB maps have a tendency to overestimate the ground truth measurements up to a factor of ten. These are not encouraging results showing that these maps are far from being accurate enough to be used in remote sensing of forest carbon stock at a small scale, as is the case for the ReforesTree dataset.\nOur baseline model, on the other hand, has a slight tendency of underestimating the biomass. The model has an evident advantage, to be trained on the dataset, but these initial results show promise for the individual tree estimation approach using drone imagery for forest carbon inventory."
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Conclusions and Future Work",
69
+ "text": "We introduce the ReforesTree dataset in hopes of encouraging the fellow machine learning community to take on the challenge of developing low-cost, scalable, trustworthy and accurate solutions for monitoring, verification and reporting of tropical reforestation inventory. We also present an outlined methodology for creating an annotated machine learning dataset from field data and drone imagery, and train a baseline CNN model for individual tree aboveground biomass estimation. This methodology includes a data processing pipeline leveraging a fine-tuned tree crown detection algorithm and an optimal transport matching algorithm for reduction of GPS noise.\nThe ReforesTree dataset of field measurements and low-cost, high-resolution RGB drone imagery represents the trade-off for accuracy and data availability of remote sensing of forest carbon stock in tropical regions. It can be used to train new or benchmark existing models for MVR of carbon offsetting reforestation protocols. Remote inventory of small scale tropical reforestation projects comes with several ecological challenges, such high biodiversity, level of canopy closure, and topology. This dataset is a start to develop a generalized model that can be fine-tuned on local scale. Future work will investigate ways to improve the methodology and reduce error in the machine learning ready dataset, and increase the explainability to have a trustworthy and transparent model. Additionally, we see further potential in fusing satellite and other available geoecological data layers as well as leveraging the multiple labels available (e.g. DBH, species) as auxiliary tasks in a multitask learning problem.\nAs the world is rapidly approaching planetary doom, we need to collaborate across disciplines to implement and scale the climate mitigation strategies available. Restoration of forests is one of our most important climate mitigation strategies. And by reducing the overestimation of carbon offsets, we can allow every man on earth who owns a tree to participate in climate action. Biodiverse and sustainable forestry can provide hope not only the for the machine learning community, but also beyond."
70
+ },
71
+ {
72
+ "section_id": "6",
73
+ "parent_section_id": null,
74
+ "section_name": "Acknowledgments",
75
+ "text": "The authors are thankful for the guidance and advice by our academic collaborator (Prof. Dava Newman, Prof. Lynn H Kaack, Prof. Thomas Crowther and the CrowtherLab), non-governmental institutions (BrainForest, WWF Switzerland, Restor), Isabel Hillman, Simeon Max, Microsoft AI for Earth, and support from the local community in Ecuador.\nLastly, we extend our sincere gratitude to Autumn Nguyen and Sulagna Saha for their significant contributions to this work. Their thorough review process led to substantial improvements in both the manuscript and the underlying codebase. Their detailed technical analysis and implementations have enhanced the robustness and reliability of our research. A comprehensive report of their contributions can be found in our technical documentation: https://gainforest.substack.com/p/improving-reforestree-correcting ###reference_g-reforestree-correcting###."
76
+ },
77
+ {
78
+ "section_id": "7",
79
+ "parent_section_id": null,
80
+ "section_name": "Technical Appendix",
81
+ "text": ""
82
+ },
83
+ {
84
+ "section_id": "7.1",
85
+ "parent_section_id": "7",
86
+ "section_name": "data cleaning",
87
+ "text": "All 28 species were divided into 6 species family groups: banana, cacao, fruit, timber, citrus and other.\nThe field data was manually collected as a standard manual forest inventory, potentially leading to human errors, missing values and outliers.\nThe dataset needed to reflect the ground truth. Therefore it was important not to remove trees from the dataset unnecessarily. All missing DBH values were given a value based on the average DBH of the same species for the year it was planted. Of the 28 species, only 3 species (in total 25 trees) were missing DBH values: 23 lemon (citrus), one balsa (timber), one bariable (other) trees. These were given DBH values interpolated from the other trees in the same family group and which were planted the same year.\nAdditionally, 8 banana trees that had DBH values larger than 50cm, which is unrealistically high. Assuming that there was a manual entry mistake, these values were exchanged with the maximum value of the banana trees for the year planted.\n\n###figure_5###"
88
+ },
89
+ {
90
+ "section_id": "7.2",
91
+ "parent_section_id": "7",
92
+ "section_name": "Aligning Drone Images with Field Boundaries",
93
+ "text": "A key issue identified in the ReforesTree pipeline was the mismatch between the drone imagery boundaries and the field data boundaries. To address this, we implemented the following steps to align the drone images with the field measurements for the six agroforestry sites. The code for this is in this reforestree-correction repository ###reference_correction/tree/main###.\nGeoDataframe Creation: We converted the field data, which included the longitude and latitude of each point, into a GeoDataFrame using the geopandas library. This allowed us to create point geometries that were easy to visualize and manipulate. The field data points, visualized as red dots in Figure 6 ###reference_###, served as the starting reference.\nBoundary Extraction using Alpha Shape: To capture the boundary of the field data, we used the alphashape library to create a convex hull around the points. By choosing an alpha value of 15000, similar to the value used by (Barenne et al. 2022 ###reference_b3###), we generated a tight boundary around the field data points.\nOverlay and Crop Drone Imagery: Using the rasterio library, we overlapped the generated alphashape boundary onto the drone imagery (in TIFF format). We then cropped the unnecessary parts of the image, outside the boundary, replacing them with white pixels. This step is illustrated by the transition from the third to fourth images in Figure 6 ###reference_###.\nAdjusting Image Boundaries: Finally, after cropping, we identified the bounds of the non-white pixels in the images and adjusted them to ensure they fit a square shape correctly. This was essential for integrating the images into the AGBench library. The final result can be seen in the transition from the fourth to the last image in Figure 6 ###reference_###.\n\n###figure_6###"
94
+ },
95
+ {
96
+ "section_id": "7.3",
97
+ "parent_section_id": "7",
98
+ "section_name": "Benchmark of satellite-based AGB maps",
99
+ "text": "To benchmark the low resolution (LR) satellite-based maps, we fitted it to the high resolution (HR) drone imagery overlapping the GPS coordinates.\n\n###figure_7### The calculation of the total AGB was done in five steps, illustrated in Figure 7 ###reference_###\ncropping the LR satellite map with a padding around the polygon of the site to reduce computation intensity (Satellite Raw)\nlinearly interpolating the values for this map and resize the map with the same HR pixel resolution as the drone imagery (Satellite Interpolated)\ncropping the map further fitting with the GPS locations (max/min) of the drone imagery\nfiltering out the site area by removing all pixels in the satellite-based map, that are outside of the drone imagery, coloured white (Satellite Filtered)\nlastly, multiplying the AGB mean density of the filtered map with the project site area to get the total AGB\nWe analysed the following three maps:\n(Global Forest Watch 2019 ###reference_b15###): Aboveground Woodly Biomass with 30x30m resolution for the year of 2000.\n(Spawn, Sullivan, and Lark 2020 ###reference_b43###): Global Aboveground and Belowground Biomass Carbon Density Maps for the Year 2010 with 300x300m resolution.\n(Santoro 2018 ###reference_b38###): GlobBiomass - Global Datasets of Forest Biomass with 100x100m resolution for the year 2010."
100
+ },
101
+ {
102
+ "section_id": "7.4",
103
+ "parent_section_id": "7",
104
+ "section_name": "Baseline CNN",
105
+ "text": "We trained the model on a single GPU of the type GeForce RTX 3090. The learning rate used was 1e-3, batch size of 64 for 30 epochs achieving a root square mean loss (RMSE) of 0,1."
106
+ }
107
+ ],
108
+ "appendix": [],
109
+ "tables": {
110
+ "1": {
111
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx3.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx3.T1.1.1.1.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.1.1.1.1\">Site</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx3.T1.1.1.1.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.1.1.2.1\">No. of</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx3.T1.1.1.1.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.1.1.3.1\">No. of</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx3.T1.1.1.1.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.1.1.4.1\">Site</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx3.T1.1.1.1.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.1.1.5.1\">total</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx3.T1.1.1.1.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.1.1.6.1\">total</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx3.T1.1.2.2.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.2.2.1.1\">no.</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx3.T1.1.2.2.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.2.2.2.1\">Trees</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx3.T1.1.2.2.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.2.2.3.1\">Species</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx3.T1.1.2.2.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.2.2.4.1\">Area</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx3.T1.1.2.2.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.2.2.5.1\">AGB</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx3.T1.1.2.2.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.2.2.6.1\">CO2e</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.3.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.3.1.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.3.1.1.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.3.1.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.3.1.2.1\">743</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.3.1.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.3.1.3.1\">18</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.3.1.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.3.1.4.1\">0.51</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.3.1.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.3.1.5.1\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.3.1.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.3.1.6.1\">5</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.4.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.4.2.1\"><span class=\"ltx_text 
ltx_font_smallcaps\" id=\"Sx3.T1.1.4.2.1.1\">2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.4.2.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.4.2.2.1\">929</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.4.2.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.4.2.3.1\">22</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.4.2.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.4.2.4.1\">0.62</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.4.2.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.4.2.5.1\">15</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.4.2.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.4.2.6.1\">9</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.5.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.5.3.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.5.3.1.1\">3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.5.3.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.5.3.2.1\">789</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.5.3.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.5.3.3.1\">20</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.5.3.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.5.3.4.1\">0.48</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.5.3.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.5.3.5.1\">10</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.5.3.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.5.3.6.1\">6</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.6.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.6.4.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.6.4.1.1\">4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.6.4.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.6.4.2.1\">484</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.6.4.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.6.4.3.1\">12</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.6.4.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.6.4.4.1\">0.47</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.6.4.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.6.4.5.1\">5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.6.4.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.6.4.6.1\">3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.7.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.7.5.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.7.5.1.1\">5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.7.5.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.7.5.2.1\">872</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.7.5.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.7.5.3.1\">14</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.7.5.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.7.5.4.1\">0.56</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.7.5.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.7.5.5.1\">15</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.7.5.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.7.5.6.1\">9</span></td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"Sx3.T1.1.8.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.8.6.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.8.6.1.1\">6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.8.6.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.8.6.2.1\">846</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.8.6.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.8.6.3.1\">16</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.8.6.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.8.6.4.1\">0.53</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.8.6.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.8.6.5.1\">12</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.8.6.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.8.6.6.1\">7</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.9.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx3.T1.1.9.7.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.9.7.1.1\">total</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx3.T1.1.9.7.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.9.7.2.1\">4463</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx3.T1.1.9.7.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.9.7.3.1\">28</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx3.T1.1.9.7.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.9.7.4.1\">3.17</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx3.T1.1.9.7.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.9.7.5.1\">66</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx3.T1.1.9.7.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx3.T1.1.9.7.6.1\">40</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Overview of the six project sites in Ecuador, as gathered in field measurements. Aboveground biomass (AGB) is measured in metric tons and area in hectares.</figcaption>\n</figure>",
112
+ "capture": "Table 1: Overview of the six project sites in Ecuador, as gathered in field measurements. Aboveground biomass (AGB) is measured in metric tons and area in hectares."
113
+ },
114
+ "2": {
115
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx4.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T2.1.1.1.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.1.1.1.1\">Site</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T2.1.1.1.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.1.1.2.1\">Field</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T2.1.1.1.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.1.1.3.1\">GFW</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T2.1.1.1.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.1.1.4.1\">Spawn</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T2.1.1.1.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.1.1.5.1\">Santoro</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T2.1.1.1.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.1.1.6.1\">Baseline</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx4.T2.1.2.2.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.2.2.1.1\">no.</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx4.T2.1.2.2.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.2.2.2.1\">Data</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx4.T2.1.2.2.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.2.2.3.1\">2019</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx4.T2.1.2.2.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.2.2.4.1\">2020</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx4.T2.1.2.2.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.2.2.5.1\">2021</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Sx4.T2.1.2.2.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.2.2.6.1\">(Ours)</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.3.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.3.1.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.3.1.1.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.3.1.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.3.1.2.1\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.3.1.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.3.1.3.1\">99</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.3.1.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.3.1.4.1\">97</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.3.1.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.3.1.5.1\">36</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.3.1.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.3.1.6.1\">7</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.4.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.4.2.1\"><span class=\"ltx_text ltx_font_smallcaps\" 
id=\"Sx4.T2.1.4.2.1.1\">2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.4.2.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.4.2.2.1\">15</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.4.2.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.4.2.3.1\">108</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.4.2.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.4.2.4.1\">130</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.4.2.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.4.2.5.1\">42</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.4.2.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.4.2.6.1\">8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.5.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.5.3.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.5.3.1.1\">3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.5.3.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.5.3.2.1\">10</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.5.3.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.5.3.3.1\">36</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.5.3.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.5.3.4.1\">206</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.5.3.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.5.3.5.1\">15</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.5.3.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.5.3.6.1\">15</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.6.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.6.4.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.6.4.1.1\">4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.6.4.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.6.4.2.1\">5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.6.4.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.6.4.3.1\">5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.6.4.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.6.4.4.1\">102</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.6.4.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.6.4.5.1\">32</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.6.4.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.6.4.6.1\">9</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.7.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.7.5.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.7.5.1.1\">5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.7.5.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.7.5.2.1\">15</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.7.5.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.7.5.3.1\">73</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.7.5.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.7.5.4.1\">352</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.7.5.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.7.5.5.1\">12</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.7.5.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.7.5.6.1\">11</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.8.6\">\n<td 
class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.8.6.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.8.6.1.1\">6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.8.6.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.8.6.2.1\">12</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.8.6.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.8.6.3.1\">26</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.8.6.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.8.6.4.1\">91</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.8.6.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.8.6.5.1\">72</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.1.8.6.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.8.6.6.1\">15</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.9.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx4.T2.1.9.7.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.9.7.1.1\">tot.</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx4.T2.1.9.7.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.9.7.2.1\">66</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx4.T2.1.9.7.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.9.7.3.1\">331</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx4.T2.1.9.7.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.9.7.4.1\">413</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx4.T2.1.9.7.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.9.7.5.1\">89</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx4.T2.1.9.7.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"Sx4.T2.1.9.7.6.1\">65</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>The benchmark results from comparing different models for estimating AGB with the forest inventory of the ReforesTree sites. All numbers are given as AGB in Mg. GFW is <cite class=\"ltx_cite ltx_citemacro_citep\">(Global Forest Watch <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2201.11192v2#bib.bib15\" title=\"\">2019</a>)</cite>, Spawn is <cite class=\"ltx_cite ltx_citemacro_citep\">(Spawn, Sullivan, and Lark <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2201.11192v2#bib.bib43\" title=\"\">2020</a>)</cite>, Santoro is <cite class=\"ltx_cite ltx_citemacro_citep\">(Santoro et\u00a0al. <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2201.11192v2#bib.bib39\" title=\"\">2021</a>)</cite>. All of these three are satellite-based. Lastly, the baseline CNN is our drone-based model.</figcaption>\n</figure>",
116
+ "capture": "Table 2: The benchmark results from comparing different models for estimating AGB with the forest inventory of the ReforesTree sites. All numbers are given as AGB in Mg. GFW is (Global Forest Watch 2019), Spawn is (Spawn, Sullivan, and Lark 2020), Santoro is (Santoro et\u00a0al. 2021). All of these three are satellite-based. Lastly, the baseline CNN is our drone-based model."
117
+ }
118
+ },
119
+ "image_paths": {
120
+ "1": {
121
+ "figure_path": "2201.11192v2_figure_1.png",
122
+ "caption": "Figure 1: Drone imagery of each site of the ReforesTree dataset with a resolution of 2cm/px. The red dots are the locations of the trees measured in field surveys, plotted to make clear that the coverage of drone images were larger than the field measured area.",
123
+ "url": "http://arxiv.org/html/2201.11192v2/extracted/6029529/figures/All_sites.png"
124
+ },
125
+ "2": {
126
+ "figure_path": "2201.11192v2_figure_2.png",
127
+ "caption": "Figure 2: The standard procedure for calculating the correct amount of carbon offsets to be certified for a reforestation project. The tree metrics are collected from manual forest inventory.",
128
+ "url": "http://arxiv.org/html/2201.11192v2/extracted/6029529/figures/carbon_stock.png"
129
+ },
130
+ "3": {
131
+ "figure_path": "2201.11192v2_figure_3.png",
132
+ "caption": "Figure 3: The raw data and data processing pipeline for the ReforesTree dataset, resulting in labels matched to bounding boxes per tree.",
133
+ "url": "http://arxiv.org/html/2201.11192v2/extracted/6029529/figures/data_process.png"
134
+ },
135
+ "4": {
136
+ "figure_path": "2201.11192v2_figure_4.png",
137
+ "caption": "Figure 4: Bounding box annotations per tree, as a result of fine-tuned DeepForest tree crown detection and manual cleaning. Red boxes represent banana trees and blue boxes represent other species.",
138
+ "url": "http://arxiv.org/html/2201.11192v2/extracted/6029529/figures/bboxes.png"
139
+ },
140
+ "5": {
141
+ "figure_path": "2201.11192v2_figure_5.png",
142
+ "caption": "Figure 5: This figure represents the count of species family groups for each of the sites. All sites have trees of all species family groups, but cacao and banana are over represented.",
143
+ "url": "http://arxiv.org/html/2201.11192v2/extracted/6029529/figures/group_site.png"
144
+ },
145
+ "6": {
146
+ "figure_path": "2201.11192v2_figure_6.png",
147
+ "caption": "Figure 6: This figure shows the alignment process between the drone images and field boundaries. The field data points (red dots) were used to create the alphashape, which was overlaid onto the drone imagery to crop unnecessary areas and ensure accurate alignment.",
148
+ "url": "http://arxiv.org/html/2201.11192v2/extracted/6029529/figures/drone_field_alignment.png"
149
+ },
150
+ "7": {
151
+ "figure_path": "2201.11192v2_figure_7.png",
152
+ "caption": "Figure 7: This figure represents the different steps in the benchmark analysis and how we calculated the total AGB amount from the satellite-based maps for the ReforesTree sites. This is taken from site no. 0. The values represented in the image is AGB density.",
153
+ "url": "http://arxiv.org/html/2201.11192v2/extracted/6029529/figures/satellite_benchmark.png"
154
+ }
155
+ },
156
+ "validation": true,
157
+ "references": [
158
+ {
159
+ "1": {
160
+ "title": "Systematic over-crediting in California\u2019s forest carbon offsets program.",
161
+ "author": "Badgley, G.; Freeman, J.; Hamman, J. J.; Haya, B.; Trugman, A. T.; Anderegg, W. R.; and Cullenward, D. 2021.",
162
+ "venue": "bioRxiv.",
163
+ "url": null
164
+ }
165
+ },
166
+ {
167
+ "2": {
168
+ "title": "Canopy Based Aboveground Biomass and Carbon Stock Estimation of Wild Pistachio Trees in Arid Woodlands Using GeoEye-1 Images.",
169
+ "author": "Bagheri, R.; Shataee, S.; and Erfanifard, S. Y. a. 2021.",
170
+ "venue": "Journal of Agricultural Science and Technology, 23(1).",
171
+ "url": null
172
+ }
173
+ },
174
+ {
175
+ "3": {
176
+ "title": "Tropical Forest Carbon Stock Estimation using RGB Drone Imagery.",
177
+ "author": "Barenne, V.; Bohl, J. P.; Dekas, D.; and Engelmann, T. 2022.",
178
+ "venue": null,
179
+ "url": null
180
+ }
181
+ },
182
+ {
183
+ "4": {
184
+ "title": "Accelerating Ecological Sciences from Above: Spatial Contrastive Learning for Remote Sensing.",
185
+ "author": "Bjorck, J.; Rappazzo, B. H.; Shi, Q.; Brown-Lima, C.; Dean, J.; Fuller, A.; and Gomes, C. 2021.",
186
+ "venue": "Proceedings of the AAAI Conference on Artificial Intelligence, 35(17): 14711\u201314720.",
187
+ "url": null
188
+ }
189
+ },
190
+ {
191
+ "5": {
192
+ "title": "McKinsey&Co: A Blueprint for Scaling Voluntary Carbon Markets to Meet the Climate Challenge.",
193
+ "author": "Blaufelder, C.; Levy, C.; Mannion, P.; Pinner, D.; and Weterings, J. 2021.",
194
+ "venue": "Accessed 31.05.2021.",
195
+ "url": null
196
+ }
197
+ },
198
+ {
199
+ "6": {
200
+ "title": "Biomass estimates for tropical forest.",
201
+ "author": "Brown, S.; and Iverson, L. 1992.",
202
+ "venue": "World Res. Rev., 4: 366\u2013383.",
203
+ "url": null
204
+ }
205
+ },
206
+ {
207
+ "7": {
208
+ "title": "Managing Forests for Climate Change Mitigation.",
209
+ "author": "Canadell, J. G.; and Raupach, M. R. 2008.",
210
+ "venue": "Science, 320: 1456\u20131457.",
211
+ "url": null
212
+ }
213
+ },
214
+ {
215
+ "8": {
216
+ "title": "The misunderstood sixth mass extinction.",
217
+ "author": "Ceballos, G.; and Ehrlich, P. 2018.",
218
+ "venue": "Science, 360: 1080.2\u20131081.",
219
+ "url": null
220
+ }
221
+ },
222
+ {
223
+ "9": {
224
+ "title": "Multi-Task Learning with Deep Neural Networks: A Survey.",
225
+ "author": "Crawshaw, M. 2020.",
226
+ "venue": null,
227
+ "url": null
228
+ }
229
+ },
230
+ {
231
+ "10": {
232
+ "title": "GainForest: Scaling Climate Finance for Forest Conservation using Interpretable Machine Learning on Satellite Imagery.",
233
+ "author": "Dao, D.; Cang, C.; Fung, C.; Zhang, M.; Pawlowski, N.; Gonzales, R.; Beglinger, N.; and Zhang, C. 2019.",
234
+ "venue": "ICML Climate Change AI workshop 2019.",
235
+ "url": null
236
+ }
237
+ },
238
+ {
239
+ "11": {
240
+ "title": "Multiresolution Multimodal Sensor Fusion for Remote Sensing Data With Label Uncertainty.",
241
+ "author": "Du, X.; and Zare, A. 2020.",
242
+ "venue": "IEEE Transactions on Geoscience and Remote Sensing, 58.",
243
+ "url": null
244
+ }
245
+ },
246
+ {
247
+ "12": {
248
+ "title": "Global Forest Resources Assessment 2020: Main report.",
249
+ "author": "FAO. 2020.",
250
+ "venue": "FAO.",
251
+ "url": null
252
+ }
253
+ },
254
+ {
255
+ "13": {
256
+ "title": "Measuring Tree Height with Remote Sensing\u2014A Comparison of Photogrammetric and LiDAR Data with Different Field Measurements.",
257
+ "author": "Ganz, S.; K\u00e4ber, Y.; and Adler, P. 2019.",
258
+ "venue": "Forests, 10: 694.",
259
+ "url": null
260
+ }
261
+ },
262
+ {
263
+ "14": {
264
+ "title": "What drives tropical deforestation?: a meta-analysis of proximate and underlying causes of deforestation based on subnational case study evidence.",
265
+ "author": "Geist, H. J.; and Lambin, E. F. 2001.",
266
+ "venue": "LUCC International Project Office, University of Louvain.",
267
+ "url": null
268
+ }
269
+ },
270
+ {
271
+ "15": {
272
+ "title": "Aboveground Live Woody Biomass Density.",
273
+ "author": "Global Forest Watch. 2019.",
274
+ "venue": "Dataset Accessed: 30.11.2021.",
275
+ "url": null
276
+ }
277
+ },
278
+ {
279
+ "16": {
280
+ "title": "Natural climate solutions.",
281
+ "author": "Griscom, B. W.; Adams, J.; Ellis, P. W.; Houghton, R. A.; Lomax, G.; Miteva, D. A.; Schlesinger, W. H.; Shoch, D.; Siikam\u00e4ki, J. V.; Smith, P.; Woodbury, P.; Zganjar, C.; Blackman, A.; Campari, J.; Conant, R. T.; Delgado, C.; Elias, P.; Gopalakrishna, T.; Hamsik, M. R.; Herrero, M.; Kiesecker, J.; Landis, E.; Laestadius, L.; Leavitt, S. M.; Minnemeyer, S.; Polasky, S.; Potapov, P.; Putz, F. E.; Sanderman, J.; Silvius, M.; Wollenberg, E.; and Fargione, J. 2017.",
282
+ "venue": "Proceedings of the National Academy of Sciences, 114(44): 11645\u201311650.",
283
+ "url": null
284
+ }
285
+ },
286
+ {
287
+ "17": {
288
+ "title": "Satellites could soon map every tree on Earth.",
289
+ "author": "Hanan, N. P.; and Anchang, J. Y. 2020.",
290
+ "venue": "Nature, 587.",
291
+ "url": null
292
+ }
293
+ },
294
+ {
295
+ "18": {
296
+ "title": "High-Resolution Global Maps of 21st-Century Forest Cover Change.",
297
+ "author": "Hansen, M. C.; Potapov, P. V.; Moore, R.; Hancher, M.; Turubanova, S. A.; Tyukavina, A.; Thau, D.; Stehman, S. V.; Goetz, S. J.; Loveland, T. R.; Kommareddy, A.; Egorov, A.; Chini, L.; Justice, C. O.; and Townshend, J. R. G. 2013.",
298
+ "venue": "Science, 342(6160): 850\u2013853.",
299
+ "url": null
300
+ }
301
+ },
302
+ {
303
+ "19": {
304
+ "title": "Managing uncertainty in carbon offsets: insights from California\u2019s standardized approach.",
305
+ "author": "Haya, B.; Cullenward, D.; Strong, A. L.; Grubert, E.; Heilmayr, R.; Sivas, D. A.; and Wara, M. 2020.",
306
+ "venue": "Climate Policy, 20(9): 1112\u20131126.",
307
+ "url": null
308
+ }
309
+ },
310
+ {
311
+ "20": {
312
+ "title": "Deep Residual Learning for Image Recognition.",
313
+ "author": "He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015.",
314
+ "venue": "arXiv:1512.03385.",
315
+ "url": null
316
+ }
317
+ },
318
+ {
319
+ "21": {
320
+ "title": "2019: Summary for Policymakers.",
321
+ "author": "IPCC. 2019.",
322
+ "venue": "In Shukla, P.; Skea, J.; Buendia, E. C.; Masson-Delmotte, V.; P\u00f6rtner, H.-O.; Roberts, D. C.; Zhai, P.; Slade, R.; Connors, S.; van Diemen, R.; Ferrat, M.; Haughey, E.; Luz, S.; Neogi, S.; Pathak, M.; Petzold, J.; Pereira, J. P.; Vyas, P.; Huntley, E.; Kissick, K.; Belkacemi, M.; and Malley, J., eds., Climate Change and Land: an IPCC special report on climate change, desertification, land degradation, sustainable land management, food security, and greenhouse gas fluxes in terrestrial ecosystems, 7\u201311.",
323
+ "url": null
324
+ }
325
+ },
326
+ {
327
+ "22": {
328
+ "title": "Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change.",
329
+ "author": "IPCC. 2021.",
330
+ "venue": "Cambridge University Press.",
331
+ "url": null
332
+ }
333
+ },
334
+ {
335
+ "23": {
336
+ "title": "Estimating Mangrove Tree Biomass and Carbon Content: A Comparison of Forest Inventory Techniques and Drone Imagery.",
337
+ "author": "Jones, A. R.; Raja Segaran, R.; Clarke, K. D.; Waycott, M.; Goh, W. S. H.; and Gillanders, B. M. 2020.",
338
+ "venue": "Frontiers in Marine Science, 6: 784.",
339
+ "url": null
340
+ }
341
+ },
342
+ {
343
+ "24": {
344
+ "title": "Caught in between: credibility and feasibility of the voluntary carbon market post-2020.",
345
+ "author": "Kreibich, N.; and Hermwille, L. 2021.",
346
+ "venue": "Climate Policy, 21(7): 939\u2013957.",
347
+ "url": null
348
+ }
349
+ },
350
+ {
351
+ "25": {
352
+ "title": "Machine Learning-based Estimation of Forest Carbon Stocks to increase Transparency of Forest Preservation Efforts.",
353
+ "author": "L\u00fctjens, B.; Liebenwein, L.; and Kramer, K. 2019.",
354
+ "venue": "2019 NeurIPS Workshop on Tackling Climate Change with Machine Learning.",
355
+ "url": null
356
+ }
357
+ },
358
+ {
359
+ "26": {
360
+ "title": "The global distribution and environmental drivers of aboveground versus belowground plant biomass.",
361
+ "author": "Ma, H.; Mo, L.; Thomas W. Crowther, D. S. M.; van den Hoogen, J.; Stocker, B. D.; Terrer, C.; and Zohner, C. M. 2021.",
362
+ "venue": "Nature Ecology & Evolution, 5: 1110\u20131122.",
363
+ "url": null
364
+ }
365
+ },
366
+ {
367
+ "27": {
368
+ "title": "Deep learning in remote sensing applications: A meta-analysis and review.",
369
+ "author": "Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; and Johnson, B. A. 2019.",
370
+ "venue": "ISPRS Journal of Photogrammetry and Remote Sensing, 152: 166\u2013177.",
371
+ "url": null
372
+ }
373
+ },
374
+ {
375
+ "28": {
376
+ "title": "Error propagation and scaling for tropical forest biomass estimates.",
377
+ "author": "Malhi, Y.; Phillips, O. L.; Chave, J.; Condit, R.; Aguilar, S.; Hernandez, A.; Lao, S.; and Perez, R. 2004.",
378
+ "venue": "Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 359(1443): 409\u2013420.",
379
+ "url": null
380
+ }
381
+ },
382
+ {
383
+ "29": {
384
+ "title": "A Novel Deep Learning Method to Identify Single Tree Species in UAV-Based Hyperspectral Images.",
385
+ "author": "Miyoshi, G. T.; Arruda, M. d. S.; Osco, L. P.; Marcato Junior, J.; Gon\u00e7alves, D. N.; Imai, N. N.; Tommaselli, A. M. G.; Honkavaara, E.; and Gon\u00e7alves, W. N. 2020.",
386
+ "venue": "Remote Sensing, 12(8).",
387
+ "url": null
388
+ }
389
+ },
390
+ {
391
+ "30": {
392
+ "title": "Tree species classification from airborne hyperspectral and LiDAR data using 3D convolutional neural networks.",
393
+ "author": "M\u00e4yr\u00e4, J.; Keski-Saari, S.; Kivinen, S.; Tanhuanp\u00e4\u00e4, T.; Hurskainen, P.; Kullberg, P.; Poikolainen, L.; Viinikka, A.; Tuominen, S.; Kumpula, T.; and Vihervaara, P. 2021.",
394
+ "venue": "Remote Sensing of Environment, 256: 112322.",
395
+ "url": null
396
+ }
397
+ },
398
+ {
399
+ "31": {
400
+ "title": "Using ICESat-2 to Estimate and Map Forest Aboveground Biomass: A First Example.",
401
+ "author": "Narine, L. L.; Popescu, S. C.; and Malambo, L. 2020.",
402
+ "venue": "Remote Sensing, 12(11).",
403
+ "url": null
404
+ }
405
+ },
406
+ {
407
+ "32": {
408
+ "title": "Estimating above ground biomass in a Salix plantation using high resolution UAV images.",
409
+ "author": "N\u00e5f\u00e4lt, S. 2018.",
410
+ "venue": "Student thesis series INES, Lund University:8963727.",
411
+ "url": null
412
+ }
413
+ },
414
+ {
415
+ "33": {
416
+ "title": "Accurate Estimation of Forest Carbon Stocks by 3-D Remote Sensing of Individual Trees.",
417
+ "author": "Omasa, K.; Qiu, G. Y.; Watanuki, K.; Yoshimi, K.; and Akiyama, Y. 2003.",
418
+ "venue": "Environmental Science & Technology, 37.",
419
+ "url": null
420
+ }
421
+ },
422
+ {
423
+ "34": {
424
+ "title": "Sourcebook for BioCarbon Fund Projects.",
425
+ "author": "Pearson, T.; Walker, S.; and Brown, S. 2005.",
426
+ "venue": "Accessed 15.09.2021 URL: https://winrock.org/document/sourcebook-for-land-use-land-use-change-and-forestry-projects/.",
427
+ "url": null
428
+ }
429
+ },
430
+ {
431
+ "35": {
432
+ "title": "Comparison of methods for measuring and assessing carbon stocks and carbon stock changes in terrestrial carbon pools. How do the accuracy and precision of current methods compare? A systematic review protocol.",
433
+ "author": "Petrokofsky, G.; Kanamaru, H.; Achard, F.; Goetz, S. J.; Joosten, H.; Holmgren, P.; Lehtonen, A.; Menton, M. C. S.; Pullin, A. S.; and Wattenbach, M. 2012.",
434
+ "venue": "Environmental Evidence, 1: 6.",
435
+ "url": null
436
+ }
437
+ },
438
+ {
439
+ "36": {
440
+ "title": "Planetary boundaries:exploring the safe operating space for humanity.",
441
+ "author": "Rockst\u00f6m, J.; Steffen, W.; K. Noone, \u00c1. P.; Chapin, F. S.; Lambin, E.; Lenton, T. M.; Scheffer, M.; Folke, C.; Schellnhuber, H.; Nykvist, B.; Wit, C. A. D.; Hughes, T.; S. van der Leeuw, H. R.; S\u00f6rlin, S.; Snyder, P. K.; R. Costanza, U. S.; Falkenmark, M.; Karlberg, L.; Corell, R. W.; Fabry, V. J.; Hansen, J.; Walker, B.; Liverman, D.; Richardson, K.; Crutzen, P.; and Foley, J. 2009.",
442
+ "venue": "Ecology and Society, 14: 32.",
443
+ "url": null
444
+ }
445
+ },
446
+ {
447
+ "37": {
448
+ "title": "Benchmark map of forest carbon stocks in tropical regions across three continents.",
449
+ "author": "Saatchi, S. S.; Harris, N. L.; Brown, S.; Lefsky, M.; Mitchard, E. T. A.; Salas, W.; Zutta, B. R.; Buermann, W.; Lewis, S. L.; Hagen, S.; Petrova, S.; White, L.; Silman, M.; and Morel, A. 2011.",
450
+ "venue": "Proceedings of the National Academy of Sciences, 108(24): 9899\u20139904.",
451
+ "url": null
452
+ }
453
+ },
454
+ {
455
+ "38": {
456
+ "title": "GlobBiomass - global datasets of forest biomass.",
457
+ "author": "Santoro, M. 2018.",
458
+ "venue": null,
459
+ "url": null
460
+ }
461
+ },
462
+ {
463
+ "39": {
464
+ "title": "The global forest above-ground biomass pool for 2010 estimated from high-resolution satellite observations.",
465
+ "author": "Santoro, M.; Cartus, O.; Carvalhais, N.; Rozendaal, D. M. A.; Avitabile, V.; Araza, A.; de Bruin, S.; Herold, M.; Quegan, S.; Rodr\u00edguez-Veiga, P.; Balzter, H.; Carreiras, J.; Schepaschenko, D.; Korets, M.; Shimada, M.; Itoh, T.; Moreno Mart\u00ednez, A.; Cavlovic, J.; Cazzolla Gatti, R.; da Concei\u00e7\u00e3o Bispo, P.; Dewnath, N.; Labri\u00e8re, N.; Liang, J.; Lindsell, J.; Mitchard, E. T. A.; Morel, A.; Pacheco Pascagaza, A. M.; Ryan, C. M.; Slik, F.; Vaglio Laurin, G.; Verbeeck, H.; Wijaya, A.; and Willcock, S. 2021.",
466
+ "venue": "Earth System Science Data, 13(8): 3927\u20133950.",
467
+ "url": null
468
+ }
469
+ },
470
+ {
471
+ "40": {
472
+ "title": "Mapping forest tree species in high resolution UAV-based RGB-imagery by means of convolutional neural networks.",
473
+ "author": "Schiefer, F.; Kattenborn, T.; Frick, A.; Frey, J.; Schall, P.; Koch, B.; and Schmidtlein, S. 2020.",
474
+ "venue": "ISPRS Journal of Photogrammetry and Remote Sensing, 170: 205\u2013215.",
475
+ "url": null
476
+ }
477
+ },
478
+ {
479
+ "41": {
480
+ "title": "Allometric models for estimating aboveground biomass of shade trees and coffee bushes grown together.",
481
+ "author": "Segura, M.; Kanninen, M.; and Su\u00e1rez, D. 2006.",
482
+ "venue": "Agroforestry Systems, 68: 143\u2013150.",
483
+ "url": null
484
+ }
485
+ },
486
+ {
487
+ "42": {
488
+ "title": "Terrestrial biodiversity threatened by increasing global aridity velocity under high-level warming.",
489
+ "author": "Shi, H.; Tian, H.; Lange, S.; Yang, J.; Pan, S.; Fu, B.; and Reyer, C. P. O. 2021.",
490
+ "venue": "Proceedings of the National Academy of Sciences of the United States of America (PNAS), 18: 36.",
491
+ "url": null
492
+ }
493
+ },
494
+ {
495
+ "43": {
496
+ "title": "Harmonized globadgleyl maps of above and belowground biomass carbon density in the year 2010.",
497
+ "author": "Spawn, S.; Sullivan, C.; and Lark, T. e. a. 2020.",
498
+ "venue": "Sci Data, 7: 112.",
499
+ "url": null
500
+ }
501
+ },
502
+ {
503
+ "44": {
504
+ "title": "Review on carbon storage estimation of forest ecosystem and applications in China.",
505
+ "author": "Sun, W.; and Liu, X. 2019.",
506
+ "venue": "Forest Ecosystems, 7: 4.",
507
+ "url": null
508
+ }
509
+ },
510
+ {
511
+ "45": {
512
+ "title": "Carbon stock assessment for a forest-to-coffee conversion landscape in Sumber-Jaya (Lampung, Indonesia): from allometric equations to land use change analysis.",
513
+ "author": "Van Noordwijk, M.; Rahayu, S.; Hairiah, K.; Wulan, Y.; Farida, A.; and Verbist, B. 2002.",
514
+ "venue": "Science in China, 45.",
515
+ "url": null
516
+ }
517
+ },
518
+ {
519
+ "46": {
520
+ "title": "Topics in optimal transportation.",
521
+ "author": "Villani, C. 2003.",
522
+ "venue": "58. American Mathematical Soc.",
523
+ "url": null
524
+ }
525
+ },
526
+ {
527
+ "47": {
528
+ "title": "NEON Crowns: a remote sensing derived dataset of 100 million individual tree crowns.",
529
+ "author": "Weinstein, B. G.; Marconi, S.; Bohlman, S.; Zare, A.; Singh, A.; Graves, S. J.; and White, E. 2020a.",
530
+ "venue": "bioRxiv.",
531
+ "url": null
532
+ }
533
+ },
534
+ {
535
+ "48": {
536
+ "title": "Individual tree-crown detection in RGB imagery using semi-supervised deep learning neural networks.",
537
+ "author": "Weinstein, B. G.; Marconi, S.; Bohlman, S.; Zare, A.; and White, E. 2019.",
538
+ "venue": "Remote Sensing, 11(11): 1309.",
539
+ "url": null
540
+ }
541
+ },
542
+ {
543
+ "49": {
544
+ "title": "Cross-site learning in deep learning RGB tree crown detection.",
545
+ "author": "Weinstein, B. G.; Marconi, S.; Bohlman, S. A.; Zare, A.; and White, E. P. 2020b.",
546
+ "venue": "Ecological Informatics, 56: 101061.",
547
+ "url": null
548
+ }
549
+ },
550
+ {
551
+ "50": {
552
+ "title": "Overstated carbon emission reductions from voluntary REDD+ projects in the Brazilian Amazon.",
553
+ "author": "West, T. A. P.; B\u00f6rner, J.; Sills, E. O.; and Kontoleon, A. 2020.",
554
+ "venue": "Proceedings of the National Academy of Sciences, 117(39): 24188\u201324194.",
555
+ "url": null
556
+ }
557
+ },
558
+ {
559
+ "51": {
560
+ "title": "Small-scale forestry and carbon offset markets: An empirical study of Vermont Current Use forest landowner willingness to accept carbon credit programs.",
561
+ "author": "White, A. E.; Lutz, D. A.; Howarth, R. B.; and Soto, J. R. 2018.",
562
+ "venue": "PLOS ONE, 13(8): 1\u201324.",
563
+ "url": null
564
+ }
565
+ },
566
+ {
567
+ "52": {
568
+ "title": "Predicting Forest Fire Using Remote Sensing Data And Machine Learning.",
569
+ "author": "Yang, S.; Lupascu, M.; and Meel, K. S. 2021.",
570
+ "venue": "arXiv:2101.01975.",
571
+ "url": null
572
+ }
573
+ },
574
+ {
575
+ "53": {
576
+ "title": "Carbon stock in different ages and plantation system of cocoa: allometric approach.",
577
+ "author": "Yuliasmara, F.; Wibawa, A.; and Prawoto, A. 2009.",
578
+ "venue": "Pelita Perkebunan (a Coffee and Cocoa Research Journal), 26.",
579
+ "url": null
580
+ }
581
+ },
582
+ {
583
+ "54": {
584
+ "title": "Multi-source remote sensing data fusion: status and trends.",
585
+ "author": "Zhang, J. 2010.",
586
+ "venue": "International Journal of Image and Data Fusion, 1.",
587
+ "url": null
588
+ }
589
+ },
590
+ {
591
+ "55": {
592
+ "title": "Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources.",
593
+ "author": "Zhu, X. X.; Tuia, D.; Mou, L.; Xia, G.-S.; Zhang, L.; Xu, F.; and Fraundorfer, F. 2017.",
594
+ "venue": "IEEE Geoscience and Remote Sensing Magazine, 5(4): 8\u201336.",
595
+ "url": null
596
+ }
597
+ }
598
+ ],
599
+ "url": "http://arxiv.org/html/2201.11192v2"
600
+ }
20241127/2204.02688v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2206.09906v2.json ADDED
@@ -0,0 +1,142 @@
1
+ {
2
+ "title": "Achieving Dexterous Bidirectional Interaction in Uncertain Conditions for Medical Robotics",
3
+ "abstract": "Medical robotics can help improve the reach of healthcare services. A challenge for medical robots is their complex physical interaction. This work evaluates a recently introduced control architecture based on Fractal Impedance Control (FIC) in medical applications. The deployed FIC architecture is robust to delay between the master and the replica robots and can switch online between an admittance and impedance behaviour. Our experiments analyse three scenarios: teleoperated surgery, rehabilitation, and remote ultrasound scan. The experiments did not require any adjustment of the robot tuning, which is essential in medical applications where the operators do not have an engineering background. Our results show that it is possible to teleoperate the robot to perform remote occupational therapy, operate a scalpel, and use an ultrasound scan. However, our experiments also highlighted the need for a better robot embodiment to control the system precisely in 3D dynamic tasks.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Robot-mediated medical services have been identified as a possible solution to the ageing population in developed countries in the last few decades. An older population implies a lower active workforce and an increase in age-related diseases, increasing strain on the healthcare sector [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###]. Additionally, as highlighted from the COVID-19 pandemic, reduced access to healthcare facilities can currently compromise healthcare quality. This problem was known in the sector, but it was not prioritised and was seen as a long-term problem because it affected only the population living in remote locations. The pandemic has revealed the short-term relevance of new technologies that can enhance the territorial permeability of these services.\nRehabilitation and robot-aided surgery are among the first applications in medical robotics[7 ###reference_b7###, 1 ###reference_b1###]. The rehabilitation robots have shown how the introduction of these technologies in the rehabilitation centre has allowed an increase of bandwidth and therapeutic improvements in the patients [7 ###reference_b7###, 1 ###reference_b1###, 3 ###reference_b3###, 4 ###reference_b4###]. Currently, multiple planar robots for upper-limb rehabilitation are available on the market, which can also be deployed at home or community centres [6 ###reference_b6###]. Concurrently, the knowledge gained for rehabilitation robots supported the development of assistive technologies for medical, civil, and industrial applications. These technologies aim to support pathological cases, but they also target the reduction of injuries in the healthy population [8 ###reference_b8###].\nSurgical robots are the other devices that immediately attracted the attention of researchers, which have been seen as an opportunity to allow doctors to operate on patients remotely [2 ###reference_b2###, 5 ###reference_b5###]. Endoscopic surgery also provided an ideal case study for robotics. Endoscopes were already an established device when roboticists approached the problem, providing minimally invasive access to internal organs, and they had established protocols and techniques [9 ###reference_b9###, 10 ###reference_b10###]. Therefore, medical robots could be developed to automate and improve an available technology, which has also increased the acceptability of these technologies in the medical community. An additional benefit of endoscopic surgery is the quasi-spherical operational field that can be projected on a flat screen without significantly impacting the operator\u2019s perception. More recently, the knowledge gained from developing co-bots, robots designed to share their workspace with humans, has also enabled the development of robots for orthopaedics surgery [11 ###reference_b11###]. In these systems, the doctor interacts with the end-effector to increase the quality of knee and hip prosthetics; however, the literature on these systems does not indicate a significant benefit of robot-aided surgery compared to traditional systems [12 ###reference_b12###].\nResearchers have recently looked into performing other types of medical interventions in teleoperation, exploiting needles and scalpels[2 ###reference_b2###, 5 ###reference_b5###]. Teleoperation presents unique challenges compared to autonomous manipulation. 
The robot must follow the operator\u2019s real-time commands without knowing their intentions while maintaining interaction stability and adhering to safety constraints. The intrinsic interaction complexity connected with the variegated mechanical properties of biological tissues poses a challenge to traditional interaction control approaches, which rely on contact models. These controllers also require extensive tuning for switching between operations, requiring application-specific knowledge and profound knowledge of the control architecture. Furthermore, the introduction of delays and the exponential increase of computational complexity when multi-arms are involved render extremely challenging the applicability of these methods in teleoperation [5 ###reference_b5###, 13 ###reference_b13###]. Therefore, these methods can potentially generate unsafe interaction due to the intrinsic energy tracking limitations due to the discrete nature of the virtual tank conservative energy [14 ###reference_b14###].\n###figure_1### Other applications of co-bots in robotics involve automated diagnostics (e.g., ultrasound scan) and robot-aided TMS (Transcranial Magnetic Stimulation) [15 ###reference_b15###, 16 ###reference_b16###]. The automation of diagnostic technology looks into the possibility of completely automating examinations such as the ultrasound scan, looking into machine learning and neural networks to identify anomalies in the image and perform a diagnosis. The application for TMS aims to improve the stimulation by improving the neural circuit targeting, as this technology\u2019s effectiveness depends on the selective stimulation of the nervous tissues using magnetic induction.\nRecently, our group has developed an impedance controller, called Fractal Impedance Controller (FIC), capable of robust interaction in unstructured environments without compromising the tracking accuracy [14 ###reference_b14###, 17 ###reference_b17###]. The FIC achieves these properties thanks to its passivity and path-independent observer, making it robust to delays and reducing bandwidth in state feedback [18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###]. The FIC teleoperation architecture has been experimentally tested in teleoperation for delays up to at a feedback bandwidth of , showing the robustness of interaction with a significant reduction of manipulability [18 ###reference_b18###]. The passivity also allows multiple controllers to be superimposed without affecting their stability, enabling decoupling the control problems and reducing the computational complexity[21 ###reference_b21###]. Earlier teleoperation experiments showcase how the proposed architecture enables the remote operator to collaborate with another person interacting with the replica robots[18 ###reference_b18###, 19 ###reference_b19###, 22 ###reference_b22###, 23 ###reference_b23###].\nThis manuscript presents the preliminary evaluation of the performances of teleoperation architecture based on the FIC in using a scalpel, performing occupational therapy and an ultrasound scan (Fig. 1 ###reference_###). The scope is to understand the potential capabilities of the proposed method and identify the challenges to overcome. Section II ###reference_### gives an overview of the controller, which is the same (including gains) presented in [23 ###reference_b23###]. Section III ###reference_### describes the experiments and presents the results. 
Sections IV ###reference_### and V ###reference_### discuss experimental results and draw conclusions, respectively."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Control Overview",
15
+ "text": "The controller architecture comprises two sides with independent stability for their controllers, setting our control aside from other teleoperation architectures requiring their controllers\u2019 stability to be coupled [22 ###reference_b22###, 23 ###reference_b23###, 2 ###reference_b2###]. The master measures the motion of the user operator (e.g., medical personnel), using it as command input, and provides haptic and visual feedback from the replica side (Fig. 1 ###reference_###). The replica reproduces the operator\u2019s movements and interacts with the patients and environment. This controller can operate one or multiple arms across various tasks by changing the end-effector mounted on the robots, as shown on the right side of Fig. 1 ###reference_###. It is worth noting that the arms can be either controlled independently or synchronised; notwithstanding the control modality, the stability of the two arms is independent, and their movements are synchronised, giving coordinated states for both effort and trajectory. Thus, we will present all the elements of the architecture for one robotic arm.\nThe master controller has three elements as described in Fig. 1 ###reference_###. generate the desired pose for the replica depending on the selected control mode. We implemented the position and velocity modes. The position mode passes the pose error of the master to the controllers of the replica device, reproducing it at the end-effector. It allows better dexterity in controlling the robot, limiting the workspace. The velocity mode updates the reference pose of the replica device via an integration of the velocity recorded at the master end-effector. The desired replica pose is the output defined as followed depending on the control mode:\nwhere is the initial pose of the robot when the position mode is selected, is the twist of the master device, and is the controller time step.\n is the virtual haptic feedback () provided via the FIC-based controller NonLinear-PD (NLPD) formulated in [23 ###reference_b23###], and it mimics the nonlinear controller acting on the replica that deals with unexpected interactions. It provides the haptic perception of the wrench (i.e., vector of force and torques) exerted on the robot end-effector by the user command. It also enhances the haptic information beyond the interaction force recorded on the replica, being able to provide feedback when the limits of the replica workspace are reached. This feedback is summed up as the wrench recorded at the end-effector ( ), scaled by a gain that is controlled online by the user with the grasp-DoF of the Sigma-7 device.\nThe replica controller has two main components, as shown in Fig. 1 ###reference_###. The Force Controller (FC) provides an admittance controller capable of tracking a desired interaction force at the end-effector. The Motion Adaptation (MA) and Interaction Controler (IC) adapt the desired motion () to the robot\u2019s physical capabilities and the task requirements and, subsequently, generate the torque command for the Replica ().\nThe FC is an admittance controller that modifies the trajectory input by the user to account for the interaction at the robots\u2019 end-effectors, and it is based on the approach used in [23 ###reference_b23###]. 
It uses the the end-effector wrench ( ) and the joint kinematics () to estimate the displacement required to maintain the desired interaction force received by the MA.\nThe MA is performed with an algorithm called SEIKO Retargeting [24 ###reference_b24###, 23 ###reference_b23###]. This algorithm computes the desired whole-body configuration from the Cartesian input commands, solving a single iteration of Sequential Quadratic Programming (S-QP) at on the tangent space of the robot\u2019s trajectory. SEIKO Retargeting ensures that the next expected state is feasible (i.e., within the robots\u2019 kinematics and torque hardware limits) and does not pass through singularities. If any of these adverse conditions occur, the optimisation returns the feasible solution closest to the desired state.\nThe IC comprises a superimposition of five independent controllers, and all except the NLPD can be switched on and off without affecting the system\u2019s stability [25 ###reference_b25###, 23 ###reference_b23###]. However, they might impact the accuracy and responsiveness of the replica. These controllers are a feed-forward load compensation, a postural joint space PD controller, a nonlinear compensation of the robot dynamics, and a relative Cartesian controller. This last controller is turned on only for the bimanual experiments, and it enhances the arms coordination.\nThe multi-arm coordination can be switched on online, allowing the user to control multiple arms with a single haptic device. It is executed at the MA level, where the optimisation constraints are added to the conditions required to maintain the grip on the object. These constraints evaluate the contact forces with the object and the relative pose of the arms to maintain the grasp, derived by the grasp matrix and a simplified dynamic model of the object (i.e., geometry and mass)[23 ###reference_b23###]. Additionally, a fifth controller is turned on in the IC, which enforces the relative pose between the arms."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "III Experiments",
21
+ "text": "We have designed four experiments to evaluate the capabilities of the proposed method in medical robotics and identify limitations. The first experiment targets surgery, and it is specific to using a scalpel to cut a silicone model of the human skin (Fig. 2 ###reference_###a). The second experiment is a rehabilitation application, shown in Fig. 2 ###reference_###b. It evaluates the system\u2019s capabilities to be deployed as a physical interface between the patient and the therapist during occupational therapy. The third experiment assesses the capabilities of the proposed method when performing an ultrasound scan (Fig. 2 ###reference_###c). We used a phantom made of gelatin balloons (i.e., bladders, water and fruit. The fourth experiments evaluate the system\u2019s responsiveness in coordinated bi-manual manipulation of a fragile object (i.e., potato chip), evaluating the ability of the system to perform these tasks without reprogramming the controllers. However, it required the introduction of a soft force sensor at the end-effector (Fig. 2 ###reference_###.d) to enhance the perception of the interaction forces. We also want to remark that we are focusing on the linear components of the control because the angular components are expressed in quaternion, and there is no intuitive way to visualise the results.\n###figure_2###"
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "III-A Scalpel Experiment",
27
+ "text": "The two end-effectors in Fig. 2 ###reference_###a have been developed for this experiment. One end-effector holds the scalpel, and the other keeps in place the silicone skin during the cutting. The operator executes 16 cuts on the phantom, trying to proceed straight when crossing previously made incisions. The cross-incision is particularly interesting because a perpendicular cut weakens the phantom. This experiment aims to test the dexterity of the system during cutting, evaluate the impact of the system manipulability on the task, and the visual and haptic feedback performances.\nThe challenges of the scalpel experiment are the nonlinear soft-dynamics of interaction due both to the silicone of the phantom and the lateral flexibility of the scalpel blade, the 3D perception of the task before making contact, and the dexterity required to maintain the contact while executing long cuts in teleoperation.\n###figure_3### ###figure_4### ###figure_5### ###figure_6### This first experiment highlighted the difficulties in handling long-distance 3D movements with multi-camera views (Fig. 3(a) ###reference_sf1###). Such an interface for the operator does not allow the concurrent perception of the movements in the two planes. Notwithstanding, the situation improves once the scalpel makes contact with the object and the task acquires a predominant planar component. Fig. 4(a) ###reference_sf1### shows how the cuts have undulatory shapes around the segment connecting the start and the endpoints with deviations that can reach up to for longer cuts. However, the clean cuts on the material (Fig. 4(b) ###reference_sf2###), the precise straight cut in some directions, and the presence of the undulatory behaviour in others seem to suggest that these deviations are due to the manipulability of the robots along this direction.\nFinally, the operator also experienced difficulties with the haptic feedback for the left arm (i.e., hand end-effector), where there is the need for sustained interaction with the environment as shown in Fig. 5 ###reference_###. In contrast, this effect has a lower impact on the scalpel arm due to the reduced force peaks and shorter interactions\u2019 time, highlighted from the comparison of the signals\u2019 plot in Fig. 5 ###reference_###. Such a phenomenon can be explained by the low inertia of Sigma.7, which implies that the haptic feedback generates high tangential velocities in the master device. Thus, it requires active compensation from the user by increasing the interaction impedance and making the task tiring for the operator.\n###figure_7###"
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "III-B Rehabilitation Experiment",
33
+ "text": "The rehabilitation experiment is designed to evaluate the stability of the architecture when rigidly connected to a human via a brace while executing an activity simulating occupational therapy, as shown in Fig. 3(b) ###reference_sf2###.\nThe challenge of this test is the continuous trade-off between admittance and impedance behaviour. For example, if the patient takes the lead, the replica has to be transparent and behave as an admittance, while it has to switch to an impedance behaviour when the therapist intervenes to assist the patient. Such trade-off usually is extremely difficult for controllers because having an admittance controller acting on top of a non-rigid impedance controller tends to amplify the drift in the controller observer and eventually lead to instability.\nThe experiment is divided into three tasks in Fig. 6 ###reference_###. The first investigates the transparency of the admittance controller to evaluate the level of compliance achievable without an additional end-effector force sensor by having the operator drive the robot to complete the task. The second task is about assistance with the operator assisting the patient in executing the task. Lastly, the third task is a disruptive interference from the operator, introducing perturbation to the user during the execution of the task.\n###figure_8### The norm of interaction forces () recorded in the experiment, the end-effector positions and the master controller linear command () are shown in Fig. 7 ###reference_###. The proposed method is capable of switching from the full admittance behaviour during independent movement where peaks are about . This occurs in the first minute of the experiment when is close to 0. When the operator assists or perturbs the motions, the master error increases, generating a virtual force on the user. The assisted movements end when . It is characterised by higher interaction forces than autonomous motions, reaching peaks of about . In contrast, in the perturbation phase, where there is an opposition to the subject\u2019s movements, the norm of the tangential force peaks.\n###figure_9###"
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "III-C Ultrasound Scan Experiment",
39
+ "text": "The ultrasound scan experiment evaluates the ability of the system to perform a remote diagnostic test, which requires the design of an end-effector for holding the ultrasound probe (Fig. 2 ###reference_###c). However, the available ultrasound scan does not allow remote control, so the experimental setup has been modified. This experiment is performed in the line of sight teleoperation. However, the phantom was placed above the operator\u2019s line of sight to hinder the perception and promote the use of the video feedback from the ultrasound scan. We use a phantom made of commercial food gelatin mixed with psyllium husk to enhance the contrast. We have three gelatin layers with different water components; the first has the recommended water-to-gelatin ratio. In the second layer, the amount of water is halved, and on the top layer, the water is reduced to one-third. Multiple props are suspended in the mix. There are bladders made of water balloons with grapes inside to mimic masses, and some fruit (grapes) is also distributed outside the bladder directly in the gelatine. It is worth noting that we have used a high-frequency probe that is not ideal for the quality of the image. However, it does not make any difference in evaluating the physical interaction stability and dexterity, which are the objectives of this experiment.\nFig. 8 ###reference_### shows the evolution of the interaction forces, the user input () and the stiffness of the replica arm, showing how the proposed method can dynamically adjust its stiffness to interact with the nonlinear environmental dynamics. This autonomous modulation of the robot impedance allows stabilisation of the interaction while maintaining the required dexterity of interaction to perform the scan. The video also allows us to appreciate how, once the contact is made with the phantom, the exploration can be driven mainly relying on the ultrasound monitor shown in Fig.9 ###reference_###. The main limitation of this experiment was that the available ultrasound did not allow remote adjusting of the image; thus, it required being physically close to the patient.\n###figure_10### ###figure_11###"
40
+ },
41
+ {
42
+ "section_id": "3.4",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-D Bimanual Telemanipulation Experiment",
45
+ "text": "The bimanual teleoperation experiments are designed to test the responsiveness and the accuracy of the coordination during bimanual teleoperation (Fig. 2 ###reference_###.d). We have chosen a potato chip because it is at the same time brittle, stiff, and light enough to make gravity a negligible component of the interaction forces. We introduce a soft end-effector capable of providing an indirect estimation of the interaction force. We have mounted a sensor developed by the Bristol Robotics Laboratory called TACTIP [26 ###reference_b26###]. It is based on a camera sensor placed inside a soft dome with a dotted geometric pattern, and the force estimation is obtained via the measurement of the pixel distance between the state of the deformed dome and the unperturbed state of the dotted geometrical pattern. The soft sensor detects subtle interaction forces, which could not be accurately estimated from the joint torques as in the previous experiments. The scope of this experiment is to test the system\u2019s transparent interaction performances by introducing an additional sensor at the robot end-effector.\n###figure_12### ###figure_13### ###figure_14### ###figure_15### Fig. 10 ###reference_### shows the different stages of the experiment, starting with the initial asymmetric contact made by the left arm, full contact once the right arm reached the potato chip, the user interaction with the object triggering the admittance behaviour of the controller, and the bi-manual telemanipulation showcasing the impedance behaviour of the proposed method. The interaction forces estimated from the joint torque compared with the position of the potato chip in Fig. 11 ###reference_### indicate that this is not a reliable way to estimate the interaction forces and the need to introduce the TACTIP end-effector for this task. We can observe multiple offsets in the contact forces estimated from the joints\u2019 torques (Fig. 11 ###reference_###) during and after contact with the objects. These offsets occur where the environmental interaction does not solely generate movements. This latter condition observable for about starting at when the for the two arms are equal. Therefore, our experiments show that the flexibility of the proposed architecture allows overcoming the sensibility of the integrated admittance controller via the mounting of an instrumented end-effector that does not require any specific skillset in robotics."
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "IV Discussion",
51
+ "text": "The experimental results indicate that the proposed modular method is adaptable to multiple applications without tuning. The operator can change the target application by mounting the proper end-effector and selecting the associated architecture configuration. However, the modality selection is at the module level and does not require any tuning of the inner parameters of each module; thus, it is well suited for applications such as medical technologies where we have an expert operator with no engineering background.\nThe surgery experiments tested the possibility of establishing safe interaction with the soft tissues with a scalpel. However, the current limitations of the visual and haptic feedback need to be overcome before this technology can be comprehensively evaluated in experiments using more complex phantoms and biological samples. The deployment of virtual and augmented reality could help provide a better 3D perception, which will require studying the most-suited interface for providing comprehensive feedback and control of the system. Regarding the haptic feedback, employing the same robot in the master and the replica could help improve the user\u2019s feedback. This haptics is currently compromised by the high end-effector motions induced in the master device (Sigma.7) due to its lower end-effector inertia than the replica (Panda).\nThe rehabilitation experiment showcased how the controller\u2019s seamless trade-off between admittance and impedance behaviours allows robot-mediate collaboration between two human operators, which can also find application in other industries (e.g. manufacturing and logistics). Furthermore, it could also enable the deployment of commercial manipulators in rehabilitation, increasing the availability of robot-aided therapies and diversifying the market. Nevertheless, it also highlighted the same limitation in 3D perception in the visual feedback, which currently hinders assistance from the remote operator.\nThe ultrasound scan experiment showcased that it is possible to accurately control the probe for conducting a scan. The feedback from the scan monitor is sufficient to conduct the test once contact is made with the tissue; however, the 3D visual perception is essential to make contact with the desired anatomical district, which was possible thanks to the line of sight setup used for this experiment. The main limitation to the deployment of this technology is the lack of remote control for the ultrasound scan, which limits the distance of the operator from the patient to the length of the probe. Nevertheless, this application is currently the closest to eventual clinical testing among the evaluated scenarios.\nThe bimanual telemanipulation experiments tested the possibility of having robot-mediated collaboration while manipulating fragile objects by introducing a sensorised end-effector to detect the low interaction forces at the end-effector. This application is still in early development, but the sensorised end-effectors could be used for assistive technologies and applications requiring the handling of delicate objects, such as in chemical laboratories. 
While all experiments presented in this work were conducted locally, [20 ###reference_b20###] demonstrated that our system can readily be applied to long-distance teleoperation over the internet, including multi-camera visual feedback with a latency of approximately .\nLastly, another major limitation encountered in all the experiments is the limited embodiment of the remote arm, which makes it difficult for the operator to understand the manoeuvrability and the residual range of motion dictated both by the robot kinematics and the presence of objects. A possible option to enhance the embodiment is to exploit the virtual haptic controller in the master () to provide such information on the residual range of motion as a virtual resistive force."
52
+ },
53
+ {
54
+ "section_id": "5",
55
+ "parent_section_id": null,
56
+ "section_name": "Conclusion",
57
+ "text": "We presented a preliminary evaluation of a modular control architecture that enables the superimposition of manipulation and teleoperation in medical applications. Our experiments show that this method can provide robust physical interaction in a variegated set of scenarios without requiring a specialised robotic skill set to be reprogrammed. However, they also show perception issues in visual and haptic feedback, and they need to be improved before clinical testing. The visual feedback from a multi-camera view is not ideal for 3D dynamic tasks, which could be improved with an augmented reality interface. The haptic feedback is not ideal due to the gap of end-effector inertia between the master and the replica robots used in our experimental setup."
58
+ }
59
+ ],
60
+ "appendix": [],
61
+ "tables": {},
62
+ "image_paths": {
63
+ "1": {
64
+ "figure_path": "2206.09906v2_figure_1.png",
65
+ "caption": "Figure 1: On the master side, there are the operator PC and the haptic feedback devices (Sigma.7, Force Dimension Inc.). On the Replica side, 7-dof torque-controlled arms (Panda, Franka Emika GmbH) are tested in scenarios targeting surgery, rehabilitation, and diagnostics.\nThe controller of the master has three elements. TMsubscriptTM\\text{T}_{\\text{M}}T start_POSTSUBSCRIPT M end_POSTSUBSCRIPT is the module that transforms the motion of the master (\ud835\udc99Msubscript\ud835\udc99M\\bm{x}_{\\text{M}}bold_italic_x start_POSTSUBSCRIPT M end_POSTSUBSCRIPT) in the desired pose for the replica (\ud835\udc99dsubscript\ud835\udc99d\\bm{x}_{\\text{d}}bold_italic_x start_POSTSUBSCRIPT d end_POSTSUBSCRIPT). CMsubscriptCM\\text{C}_{\\text{M}}C start_POSTSUBSCRIPT M end_POSTSUBSCRIPT is a controller providing virtual haptic feedback (hMsubscript\u210eMh_{\\text{M}}italic_h start_POSTSUBSCRIPT M end_POSTSUBSCRIPT) to provide additional information to the user (e.g., workspace boundaries). KH\u2208[0,1]\u2282\u211dsubscriptKH01\u211d\\text{K}_{\\text{H}}\\in[0,1]\\subset\\mathbb{R}K start_POSTSUBSCRIPT H end_POSTSUBSCRIPT \u2208 [ 0 , 1 ] \u2282 blackboard_R is the gain applied to the wrench recorded at the end-effector of the replica robots (hesubscript\u210eeh_{\\text{e}}italic_h start_POSTSUBSCRIPT e end_POSTSUBSCRIPT).\nThe controller of the replica has two elements. FC is the force controller that can be turned on when required, introducing an admittance controller on top of the low-level Interaction Controller (IC). MA & IC is a module composed of two components. The first element is the Motion Adaptation (MA) performed by an S-QP optimisation to guarantee that the desired trajectory respects the physical limitation of the robot (e.g., power limits and singularities) and the task (e.g., holding an object in bimanual manipulation). The second element is the IC that generates the torque commands to track the desired motion produced by the MA.\nIt is worth remarking that in our experiments, the patients are substituted by two phantoms and a researcher, and another researcher acts as medical personnel.",
66
+ "url": "http://arxiv.org/html/2206.09906v2/x1.png"
67
+ },
68
+ "2": {
69
+ "figure_path": "2206.09906v2_figure_2.png",
70
+ "caption": "Figure 2: The proposed method has been used in multiple applications just by changing the end-effectors without requiring controller tuning. a) The hand end-effector used to hold the phantom during the cutting is mounted on the left arm and the support for the scalpel is on the right arm. b) The right arm has been equipped with a brace that is secure to the subject\u2019s arm with velcro straps. c) A vice-like end-effector is mounted on the right arm to secure the ultrasound probe to the robot. d) Two TACTIP sensors developed from the Bristol Robotics Laboratory [26] have been mounted on the two robots to enable the bimanual telemanipulation of the potato chip.",
71
+ "url": "http://arxiv.org/html/2206.09906v2/x2.png"
72
+ },
73
+ "3(a)": {
74
+ "figure_path": "2206.09906v2_figure_3(a).png",
75
+ "caption": "(a) Scalpel experiment\nFigure 3: Operator point of view for the scalpel and rehabilitation experiments.",
76
+ "url": "http://arxiv.org/html/2206.09906v2/x3.png"
77
+ },
78
+ "3(b)": {
79
+ "figure_path": "2206.09906v2_figure_3(b).png",
80
+ "caption": "(b) Rehabilitation experiment.\nFigure 3: Operator point of view for the scalpel and rehabilitation experiments.",
81
+ "url": "http://arxiv.org/html/2206.09906v2/x4.png"
82
+ },
83
+ "4(a)": {
84
+ "figure_path": "2206.09906v2_figure_4(a).png",
85
+ "caption": "(a)\nFigure 4: a) The cut marks on the silicone phantom show that it is difficult to proceed on a straight line. In addition, the deviation has peaks of a few millimetres, indicating the need to improve the system\u2019s performance on this task. b) The margins of the cut marks are needed, showing that the robot can robustly sustain contact with the phantom during the incision.",
86
+ "url": "http://arxiv.org/html/2206.09906v2/x5.png"
87
+ },
88
+ "4(b)": {
89
+ "figure_path": "2206.09906v2_figure_4(b).png",
90
+ "caption": "(b)\nFigure 4: a) The cut marks on the silicone phantom show that it is difficult to proceed on a straight line. In addition, the deviation has peaks of a few millimetres, indicating the need to improve the system\u2019s performance on this task. b) The margins of the cut marks are needed, showing that the robot can robustly sustain contact with the phantom during the incision.",
91
+ "url": "http://arxiv.org/html/2206.09906v2/x6.png"
92
+ },
93
+ "5": {
94
+ "figure_path": "2206.09906v2_figure_5.png",
95
+ "caption": "Figure 5: The force data for the scalpel experiments show that the robots are capable of sufficient force to hold the phantom down during cutting and can safely pass the peaks of force encountered during the cutting on the scalpel. The last two trials were conducted to check the performance in executing cross-cutting tests, and they were executed without changing the controller\u2019s parameters.",
96
+ "url": "http://arxiv.org/html/2206.09906v2/x7.png"
97
+ },
98
+ "6": {
99
+ "figure_path": "2206.09906v2_figure_6.png",
100
+ "caption": "Figure 6: Snapshots of the master and the replica robots taken during assistance in the rehabilitation experiments.",
101
+ "url": "http://arxiv.org/html/2206.09906v2/x8.png"
102
+ },
103
+ "7": {
104
+ "figure_path": "2206.09906v2_figure_7.png",
105
+ "caption": "Figure 7: The norm of the force vector recorded in the first minute of the experiment shows that the robot can follow the patient movements with peak forces of about 8 Ntimes8newton8\\text{\\,}\\mathrm{N}start_ARG 8 end_ARG start_ARG times end_ARG start_ARG roman_N end_ARG once the 2 Ntimes2newton2\\text{\\,}\\mathrm{N}start_ARG 2 end_ARG start_ARG times end_ARG start_ARG roman_N end_ARG offset is accounted for. The forces recorded during assistance reach peaks of 20 Ntimes20newton20\\text{\\,}\\mathrm{N}start_ARG 20 end_ARG start_ARG times end_ARG start_ARG roman_N end_ARG, and they further increase close to 40 Ntimes40newton40\\text{\\,}\\mathrm{N}start_ARG 40 end_ARG start_ARG times end_ARG start_ARG roman_N end_ARG in the perturbation phase, which occurred for the last minute of the experiment.",
106
+ "url": "http://arxiv.org/html/2206.09906v2/x9.png"
107
+ },
108
+ "8": {
109
+ "figure_path": "2206.09906v2_figure_8.png",
110
+ "caption": "Figure 8: The forces, the tracking error and the end-effector stiffness of the replica robot during the ultrasound scan. It shows that the proposed method can adapt the impedance behaviour to adapt the changing non-linear dynamics at the end-effector.",
111
+ "url": "http://arxiv.org/html/2206.09906v2/x10.png"
112
+ },
113
+ "9": {
114
+ "figure_path": "2206.09906v2_figure_9.png",
115
+ "caption": "Figure 9: A screenshot of the ultrasound scan shows a water bladder with inside a grape.",
116
+ "url": "http://arxiv.org/html/2206.09906v2/x11.png"
117
+ },
118
+ "10(a)": {
119
+ "figure_path": "2206.09906v2_figure_10(a).png",
120
+ "caption": "(a) Object handover\nFigure 10: Snapshots from the Bi-manual telemanipulation experiments show the experiment\u2019s different phases. The end-effector mounted on the robot replaces the admittance controller, and the nonlinear dynamics of the dome substitutes the model-based state estimator.",
121
+ "url": "http://arxiv.org/html/2206.09906v2/x12.png"
122
+ },
123
+ "10(b)": {
124
+ "figure_path": "2206.09906v2_figure_10(b).png",
125
+ "caption": "(b) End-effector admittance driven interaction\nFigure 10: Snapshots from the Bi-manual telemanipulation experiments show the experiment\u2019s different phases. The end-effector mounted on the robot replaces the admittance controller, and the nonlinear dynamics of the dome substitutes the model-based state estimator.",
126
+ "url": "http://arxiv.org/html/2206.09906v2/x13.png"
127
+ },
128
+ "10(c)": {
129
+ "figure_path": "2206.09906v2_figure_10(c).png",
130
+ "caption": "(c) Teleoperated impedance driven bi-manual manipulation\nFigure 10: Snapshots from the Bi-manual telemanipulation experiments show the experiment\u2019s different phases. The end-effector mounted on the robot replaces the admittance controller, and the nonlinear dynamics of the dome substitutes the model-based state estimator.",
131
+ "url": "http://arxiv.org/html/2206.09906v2/x14.png"
132
+ },
133
+ "11": {
134
+ "figure_path": "2206.09906v2_figure_11.png",
135
+ "caption": "Figure 11: On top, the force at the end-effector is estimated from the joints\u2019 torques. On the bottom, the expected chip position before the contact and chip position after the contact. The plots highlight the need for the additional sensor at the end-effector. The differential interaction with the two arms is barely detectable and sometimes presents a bias in the equilibrium, as can be seen at t=100 s\ud835\udc61times100secondt=$100\\text{\\,}\\mathrm{s}$italic_t = start_ARG 100 end_ARG start_ARG times end_ARG start_ARG roman_s end_ARG.",
136
+ "url": "http://arxiv.org/html/2206.09906v2/x15.png"
137
+ }
138
+ },
139
+ "validation": true,
140
+ "references": [],
141
+ "url": "http://arxiv.org/html/2206.09906v2"
142
+ }
20241127/2211.01974v3.json ADDED
@@ -0,0 +1,60 @@
1
+ {
2
+ "title": "Discrete approximations to Dirichlet and Neumann Laplacians on a half-space and norm resolvent convergence",
3
+ "abstract": "We extend recent results on discrete approximations of the Laplacian in with norm resolvent convergence to the corresponding results for Dirichlet and Neumann Laplacians on a half-space. The resolvents of the discrete Dirichlet/Neumann Laplacians are embedded into the continuum using natural discretization and embedding operators. Norm resolvent convergence to their continuous counterparts is proven with a quadratic rate in the mesh size. These results generalize with a limited rate to also include operators with a real, bounded, and H\u00f6lder continuous potential, as well as certain functions of the Dirichlet/Neumann Laplacians, including any positive real power.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Let be the Dirichlet Laplacian and let be the Neumann Laplacian on the half-space , and let . Let and be the standard finite difference discretizations of and , defined on with a mesh size ; see section 2.3 ###reference_### for the precise definitions.\nUsing suitable embedding operators and discretization operators (see section 2.2 ###reference_###), we prove the following type of norm resolvent convergence with an explicit rate in the mesh size.\nLet be compact. Then there exists such that\nand\nfor and .\nNorm resolvent convergence was first shown for discrete approximations of the Laplacian on in [10 ###reference_b10###] and was extended to classes of Fourier multipliers in [2 ###reference_b2###]. Recently norm resolvent convergence of discrete approximations to other operators have been considered as well, such as discrete Dirac operators in [3 ###reference_b3###] and quantum graph Hamiltonians in [4 ###reference_b4###].\nWe prove the above result as Theorem 3.2 ###reference_theorem2### in section 3 ###reference_###, and also prove several extensions to this result. In section 3.1 ###reference_### we add a real, bounded, and H\u00f6lder continuous potential to and , and add a discrete potential with to and . The norm resolvent estimates with a potential are given in Theorem 3.6 ###reference_theorem6### with a rate that now depends explicitly on the H\u00f6lder exponent for . Such norm resolvent convergence implies much improved spectral results compared to e.g. strong resolvent convergence. This includes convergence of the spectrum in a local Hausdorff distance [2 ###reference_b2###, Section 5].\nFinally in section 3.2 ###reference_### we prove norm resolvent estimates between and , and between and , defined via the functional calculus for certain functions that have also been considered in [2 ###reference_b2###] for estimates on the full space . The results are given in Theorem 3.10 ###reference_theorem10###. As an example, this includes for any positive real power . This example leads to norm resolvent estimates with a rate of . Fractional Laplacians on a half-space (or general domains) with Dirichlet and Neumann boundary conditions have been considered by several authors. See e.g. [1 ###reference_b1###, 5 ###reference_b5###, 7 ###reference_b7###, 8 ###reference_b8###] for some recent results. However, results are scarce for discrete approximations of such operators."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Preliminaries",
15
+ "text": "We give the results in dimensions . The case is obtained by a simple modification of the arguments below.\nLet and . For we write with and .\nThe half-space is denoted by .\nFor the reflection of in the hyperplane\n is denoted by\nFor we write with and . We write\nfor the discrete half-space. We denote the reflection in the discrete hyperplane by"
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Extension and restriction operators",
21
+ "text": "The continuous Hilbert spaces are denoted by\nIn analogy with the even-odd decomposition of functions in dimension one we introduce the reflection-even and reflection-odd functions in by defining\nand\nsuch that as an orthogonal direct sum.\nThe discrete Hilbert spaces are given by\nwith norms\nNotice that we use the index and in the notation for and . The dependence on the mesh size is given by the subscript .\nThe reflection-even and reflection-odd sequences are defined by\nand\nWe have as an orthogonal direct sum.\nThe reflection-odd extension operator and reflection-even extension operator are given by\nIn the discrete case the reflection-odd extension operator is given by\nThe discrete reflection-even extension operator\n is defined by\nThe natural restriction operators onto the half-spaces are denoted by\nObviously we have and , where we also introduced the notation for the identity operators on and , respectively."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Embedding and discretization operators",
27
+ "text": "In [2 ###reference_b2###] embedding and discretization operators were defined using a pair of biorthogonal Riesz sequences. Here we consider only the special case of an orthogonal sequence, as in [10 ###reference_b10###], but with the additional assumption that the generating function is reflection-even.\nAssume such that is an orthonormal sequence in .\nDefine\nSince is assumed reflection-even we have the\nimportant property\nDefine the embedding operators by\nFrom Assumption 2.1 ###reference_theorem1### it follows that \nis an orthonormal sequence, hence that is isometric.\nThe discretization operators are given by . With the convention that inner products are linear in the second entrance, we explicitly have\nLet us note that (2.1 ###reference_###) implies , , , and .\nThe half-space embedding operators are defined as\nThe operators and are isometric, as can be seen from the following computation. Let and use that ,\nA similar computation holds for .\nThe half-space discretization operators\n\nare defined as\nNote that . is the orthogonal projection onto in and is the orthogonal projection onto in ."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Laplacians",
33
+ "text": "Let be the Laplacian in with domain .\nThe Dirichlet Laplacian on is defined as the positive self-adjoint operator given by the Friedrichs extension of . Equivalently, is the variational operator associated with the triple , where the sesquilinear form is\nBy [6 ###reference_b6###, Theorem 9.11] the domain of on a half-space simplifies to\nwhere is the Dirichlet trace operator.\nNext we define the Neumann Laplacian on as the positive self-adjoint variational operator associated with the triple . On a half-space its domain simplifies via [6 ###reference_b6###, Theorem 9.20] to\nwhere is the Neumann trace operator.\nFrom [6 ###reference_b6###, Theorem 9.2] the trace maps satisfy for and .\nWe need the following lemma. The result is a consequence of e.g. [9 ###reference_b9###, Proposition 2.2]. We give a shorter proof for the sake of completeness.\nLet . Then and . Furthermore, for and all we have\nLet . Then and . Furthermore, for and all we have\n(i): Let .\nSince is a core for , we can\nfind a sequence such that\n and in , as .\nWe have , such that and in , as . Note that\nsince commutes with orthogonal coordinate transformations and since is supported in , i.e. away from the hyperplane . Thus\nin . Since is a closed operator we conclude that\n and . The second part of the statement follows by using for .\n(ii): Let . Restrictions of to either side of has coinciding Dirichlet and Neumann traces, so at least . We can approximate in by a sequence with . Now implies the identity\nHowever we have that , which has a zero-extension . Since\nthen and as a consequence , since there was no contention regarding the square integrability of the other partial derivatives. The rest of the proof follows by using that on commutes with orthogonal coordinate transformations, and that for .\n\u220e\nThe discrete Laplacian on is given by\nHere denotes the canonical basis for .\nThe discrete Dirichlet Laplacian on is given by\nLet . Then using the definitions\none can verify that and then\nThe discrete Neumann Laplacian on is given by\nLet . Similar to the above, and then\nSince we use homogeneous Dirichlet and Neumann conditions, the discrete Laplacians have a very similar finite difference structure. The discrete Neumann Laplacian only differs from the discrete Dirichlet Laplacian at the indices where . Here the contributions from the boundary conditions either mean that (Dirichlet case) or that (Neumann case). This subtle difference also implies the connections to odd and even reflections in (2.4 ###reference_###) and (2.5 ###reference_###)."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Results",
39
+ "text": "Additional assumptions on the function are needed to obtain our results, cf. [2 ###reference_b2###, Assumption 2.8]\nor [10 ###reference_b10###, Assumption B]. Let denote the Fourier transform of , defined as\nLet satisfy Assumption 2.1 ###reference_theorem1### and assume that\n is essentially bounded.\nAssume there exists such that\nLet , , , and be as above, with satisfying\nAssumption 3.1 ###reference_theorem1###.\nLet be compact. Then there exists such that\nand\nfor and .\nLet . Then\nWe have , since is a reflection-odd function for all .\nThus using (2.4 ###reference_###) we get\nNow , since is a reflection-odd sequence. Thus we have shown\nUsing this result together with (2.2 ###reference_###) we have shown that\nThus we can use the results in [2 ###reference_b2###] or [10 ###reference_b10###] to obtain (3.1 ###reference_###).\nTo prove (3.2 ###reference_###) note that and , and then use\n(2.3 ###reference_###) instead of (2.2 ###reference_###) and (2.5 ###reference_###) instead of (2.4 ###reference_###). This leads to:\nwhich together with the results in [2 ###reference_b2###] or [10 ###reference_b10###] completes the proof of (3.2 ###reference_###).\n\u220e"
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Adding a potential",
45
+ "text": "Next we add a potential. To obtain the results we introduce two assumptions.\nLet . Assume that there exists such that\nLet be a bounded function which is uniformly H\u00f6lder continuous of order .\nNote that denotes the closed half-space, so the conditions hold up to the boundary.\nLet satisfy Assumption 3.4 ###reference_theorem4###. Then is bounded and uniformly H\u00f6lder continuous of order on .\nBoundedness is clear, and for the H\u00f6lder continuity we only need to consider points such that . Now Assumption 3.4 ###reference_theorem4### and imply\nWe define the discretized potential as , , . Then we define and on , and\n and on .\nLet , , , and be as above, with satisfying\nAssumptions 3.1 ###reference_theorem1### and 3.3 ###reference_theorem3###. Let satisfy Assumption 3.4 ###reference_theorem4###. Define\nLet be compact. Then there exists such that\nand\nfor and .\nLet and . Then\nand\nThus we have as operators from to ,\nwhere denotes the operator of multiplication in by\n, .\nLet on\n. Then combining the above result with\nthe arguments leading to (2.2 ###reference_###) we get for\nWe can repeat these arguments in the discrete case, leading to\nfor . Here we have defined on\n. Note that and may differ only at . Thus replacing by introduces an error of order , due to Assumption 3.4 ###reference_theorem4###, and this error can be absorbed in the final estimate below.\nRepeating the computations in the proof of Theorem 3.2 ###reference_theorem2### we get\nfor\nWe can then use [2 ###reference_b2###, Theorem 4.4] to complete the proof.\nThe proof for the Neumann Laplacian is analogous, using instead that , which for and gives\nThis leads to\nwhich can also be estimated by [2 ###reference_b2###, Theorem 4.4].\n\u220e"
46
+ },
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Functions of Dirichlet and Neumann Laplacians",
51
+ "text": "Now we extend the approximation results given in Theorem 3.2 ###reference_theorem2### to functions of the Dirichlet and Neumann Laplacians on the half-space. Let be a Borel function. Using the functional calculus we can define the operators , , , and .\nWe need the following lemma, which is an immediate consequence of [11 ###reference_b11###, Proposition 5.15]; see also [9 ###reference_b9###]. For operators and the notation means that is an extension of .\nFor assume that is a self-adjoint operator on a Hilbert space . Assume that is a bounded operator such that\nLet be a Borel function on . Then we have\nIf is a bounded function then equality holds in (3.5 ###reference_###).\nIn the following assumption the parameters are chosen to be compatible with the ones in [2 ###reference_b2###, Assumption 3.1].\nAssume\nLet be a continuous function which is continuously differentiable on and satisfies the following conditions:\n,\nthere exist and such that\n for .\nthere exists such that for .\nWe omit the straightforward proof of the following lemma.\nLet satisfy Assumption 3.8 ###reference_theorem8### with parameters and . Define , . Then satisfies Assumption 3.1 in [2 ###reference_b2###] with the same parameters and .\nNext we define\nUsing these definitions it follows that is the Fourier multiplier with symbol on , and is the Fourier multiplier with symbol on . The operators and on , and the operators\n and on , are defined using the functional calculus.\nWe have the following extension of Theorem 3.2 ###reference_theorem2###.\nLet satisfy Assumption 3.8 ###reference_theorem8### with parameters and . Let\nLet , , , and be as above, with satisfying Assumption 3.1 ###reference_theorem1###. Let be compact. Then there exists such that\nand\nfor and .\nWe prove the result for the Dirichlet Laplacians.\nAssumption 3.8 ###reference_theorem8### and Lemma 3.9 ###reference_theorem9### together with [2 ###reference_b2###, Proposition 3.5] imply that we have the estimate\nfor and , with satisfying the assumption in the theorem.\nCombine Lemma 2.2 ###reference_theorem2### with Lemma 3.7 ###reference_theorem7### to get the result\nAnalogously, using (2.4 ###reference_###) and Lemma 3.7 ###reference_theorem7### we get\nUsing the results (3.8 ###reference_###)\u2013(3.10 ###reference_###) we can repeat the arguments in the proof of Theorem 3.2 ###reference_theorem2### to get the result in the Dirichlet case. The proof in the Neumann case is almost the same, so we omit it.\n\u220e\nBy repeating the proof of Theorem 3.6 ###reference_theorem6###, we may also add a potential to the operators and and add a discrete potential to the operators and . The resulting estimates, replacing those in (3.6 ###reference_###) and (3.7 ###reference_###), will have the rate .\nOf particular interest are the functions that give the powers of the Laplacian .\nLet and define , . Then and . For we can take and to satisfy the conditions in Assumption 3.8 ###reference_theorem8###. Then the estimate (3.8 ###reference_###) holds with .\nFor the conditions in Assumption 3.8 ###reference_theorem8### are satisfied with and . We get for . For we can use the result in [2 ###reference_b2###, Proposition 3.11] which yields the estimate (3.8 ###reference_###) for and with .\nWe summarize the results above as a Corollary to both Theorem 3.10 ###reference_theorem10### and the results in [2 ###reference_b2###].\nLet , , . 
Then the estimates (3.6 ###reference_###) and (3.7 ###reference_###) hold for .\nThe operators defined here do not agree with the fractional Dirichlet Laplacians on a half-space defined in [7 ###reference_b7###, 8 ###reference_b8###].\nLet , then Lemmas 2.2 ###reference_theorem2### and 3.7 ###reference_theorem7### imply , such that\n. Whereas in [7 ###reference_b7###, 8 ###reference_b8###] the definition is based on the operator applied to suitable functions in , where is the operator for extension by zero. Hence the two approaches differ by the type of extension operator that is used.\nThis research is partially supported by grant 8021\u201300084B from Independent Research Fund Denmark | Natural Sciences."
52
+ }
53
+ ],
54
+ "appendix": [],
55
+ "tables": {},
56
+ "image_paths": {},
57
+ "validation": true,
58
+ "references": [],
59
+ "url": "http://arxiv.org/html/2211.01974v3"
60
+ }
20241127/2211.15656v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2212.11143v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2212.11571v2.json ADDED
@@ -0,0 +1,177 @@
1
+ {
2
+ "title": "Scalable Primal Decomposition Schemes for Large-Scale Infrastructure Networks",
3
+ "abstract": "The operation of large-scale infrastructure networks requires scalable optimization schemes.\nTo guarantee safe system operation, a high degree of feasibility in a small number of iterations is important. Decomposition schemes can help to achieve scalability. In terms of feasibility, however, classical approaches such as the alternating direction method of multipliers (ADMM) often converge slowly. In this work, we present primal decomposition schemes for hierarchically structured strongly convex QPs.\nThese schemes offer high degrees of feasibility in a small number of iterations in combination with global convergence guarantees. We benchmark their performance against the centralized off-the-shelf interior-point solver Ipopt and ADMM on problems with up to 300,000 decision variables and constraints. We find that the proposed approaches solve problems as fast as Ipopt, but with reduced communication and without requiring a full model exchange. Moreover, the proposed schemes achieve a higher accuracy than ADMM.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The operation of infrastructure networks such as power systems, district heating grids or gas networks is challenging. In many cases, these networks are large and composed of many complex subsystems such as lower-level networks or buildings. Operation is often based on numerical optimization due to its flexibility and recent advances in solver development, which allows to solve large-scale problems quickly and to a high accuracy. For large networks, however, a centralized solution is often not desirable since, a), the problem becomes computationally challenging, even with state-of-the-art solvers; b), information collection in a central entity should be avoided due to confidentiality and privacy concerns, and, c), the responsibility for operation and updates in modeling should stay mainly in the subsystems.\nOne line of research addresses the above challenges via aggregation.\nHere, the idea is to simplify the subproblems by projecting the constraint set on the coupling variables of the infrastructure network.\nExamples for this can be found for power systems [Capitanescu2018, Kalantar-Neyestanaki2020].\nA drawback of this approach is a loss of optimality.\nMoreover, aggregation is often not straightforward, feasibility is hard to guarantee and disaggregation requires solving additional local optimization problems.\nA second line of research is based on distributed optimization. Prominent approaches are primal and dual first-order algorithms such as Lagrangian dual decomposition, the Alternating Direction Method of Multipliers (ADMM) [Everett1963, Boyd2011], and primal (sub)-gradient-based schemes [Nedic2017, Ryu2022]. Application examples range from the operation of power systems [Erseghe2014, Kim2000], over gas networks [Shin2021], district heating systems [Huang2017, Cao2019], to water networks [Coulbeck1988].\nWith their at most linear rate of convergence, these approaches often require many iterations to converge even for a modest solution quality.\nThis is often prohibitive for real-time implementation.\nDistributed second-order methods exhibit faster convergence.\nHere, classical approaches aim at decomposing the block-structure of the Karush-Kuhn-Tucker (KKT) system within interior-point algorithms [Chiang2014, Zavala2008a] or sequential quadratic programming [Varvarezos1994].\nAlternative second-order methods based on augmented Lagrangians can be found in [Engelmann2019c, Houska2016]. These approaches typically require an expensive central coordination, although it is possible to partially alleviate computation by decentralizing the Newton-steps [Engelmann2020b, Engelmann2021a, Stomberg2022a].\nPrimal decomposition schemes come with the advantage a high degree of feasibility and optimality in a small number of iterations [Geoffrion1970, DeMiguel2006, DeMiguel2008].\nFor achieving this, they require a hierarchical problem structure, i.e. 
a star as the underlying graph.\nIn this sense, they are more restrictive than the aforementioned approaches.\nIn infrastructure networks hierarchical problem structures are common, however.\nThe main idea of primal decomposition is to construct lower-level problems coordinated by one upper-level problem, where the upper-level problem considers the lower-level problems by their optimal value functions.\nPrimal decomposition has been very successful in solving large-scale problems from chemical engineering [Zavala2008, Yoshio2021] and some of the largest Quadratic Programs (QPs) and Nonlinear Programs (NLPs) from power systems [Tu2021, Petra2021, Curtis2021]. Moreover, primal decomposition allows to use specialized, domain-specific solvers to solve the subproblems and the master problem efficiently [DeMiguel2006].\nIn this work, we propose two primal decomposition schemes for solving large-scale strongly convex QPs, with global convergence guarantees.\nBoth methods rely respectively on augmented Lagrangians and exact -penalties for ensuring feasibility in the subproblems.\nSimilar -penalty based approaches have been proposed in previous works [DeMiguel2006, Tu2021].\nIn contrast to [Tu2021], our work is not restricted to a specific application and can be used on any strongly convex hierarchically structured QP.\nThe augmented-Lagrangian framework is new to the best of our knowledge.\nWe show that the augmented Lagrangian formulation exhibits improved performance compared to the 1 formulation. Moreover, we demonstrate that the algorithms are faster than off-the-shelf interior-point solvers.\nWe benchmark our algorithms against a distributed ADMM and the nonlinear solver Ipopt.\nAs benchmarks, we consider the operation of HVAC systems in a city district with a variable number of buildings and with up to decision variables and inequality constraints and two Optimal Power Flow problems with up to 7,852 buses."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Problem Formulation",
15
+ "text": "###figure_1### Many infrastructure network problems can be formulated as strongly convex QPs over a set of subsystems ,\nHere, the global decision vector is composed of local decision variables , where each belongs to one subsystem .\nThe decision variables are \u201cglobal\u201d in the sense that they belong to the interconnecting infrastructure network, described by the constraints (1d ###reference_4###).\nEach coefficient matrix/vector in the objective (1a ###reference_1###) and the constraints (1b ###reference_2###), (1c ###reference_3###) belongs to one .\nObserve that problem (1 ###reference_###) is defined over a star graph, where and constraint (1d ###reference_4###) correspond to the root vertex, and and constraints (1b ###reference_2###), (1c ###reference_3###) belong/couple the root vertex to all leafs (Figure 1 ###reference_###).\nThis structure is common in many infrastructure networks such as electricity grids, gas networks or district heating systems, which are composed of a network as the root and complex subsystems such as households, distribution grids or industrial facilities as leafs [Erseghe2014, Kim2000, Shin2021, Huang2017, Cao2019, Coulbeck1988].\nThese applications often require a high degree of feasibility in a small number of iterations without full model exchange. The main objective of this work is to develop primal decomposition schemes able to achieve that goal."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Primal Decomposition Schemes",
21
+ "text": "In contrast to duality-based techniques such as ADMM or dual decomposition,\nprimal decomposition decomposes entirely in the primal space, i.e. no dual variables are updated in the solution process.\nThe main idea here is to replace the subproblems in (1 ###reference_###) by their optimal value functions.\nSpecifically, one reformulates (1 ###reference_###) as\nwhere for all , the value function is defined as\nThe key idea is to apply standard algorithms for solving (2 ###reference_###)\nby optimizing only with respect to the coupling variables .\nDoing so can lead to enhanced robustness, as the complexity of the subproblems is not exposed to the algorithm solving (2 ###reference_###).\nAlgorithms for solving (2 ###reference_###) typically require first-order and possibly second-order derivatives of all .\nSince all are non-smooth because of the inequality constraints, one typically relies on smooth reformulations.\nInspired by interior-point methods [DeMiguel2006], we introduce log-barrier functions and\nslack variables , which approximate (3 ###reference_###) by\nwhere is a barrier parameter, , and the is evaluated component-wise. Note that , and that is smooth111Under standard regularity assumptions [DeMiguel2008, A1-C1].. A basic primal decomposition strategy with smoothing is summarized in Algorithm 1 ###reference_###."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Computing sensitivities",
27
+ "text": "Next, we review how to compute and under standard regularity assumptions based on the implicit function theorem [DeMiguel2008].\nReformulate (4 ###reference_###) by\nwhere is defined by (4a ###reference_1###), and and are defined by (4b ###reference_2###), (4c ###reference_3###).\nDefine the Lagrangian to (9 ###reference_###),\nAssume that (9 ###reference_###) is feasible for a given and that the regularity conditions from [DeMiguel2008, Ass. 1-4] hold. Then, the KKT conditions to (9 ###reference_###) form an implicit function in form of , where the superscript indicates a KKT stationary point.\nThus, by the implicit function theorem, there exist a neighborhood around for which there exists functions such that . Hence, we can rewrite (9 ###reference_###) as since is feasible.\nApplying the total derivative and the chain rule yields\nBy the KKT conditions, we have that \nand thus\nAgain by the total derivative, the Hessian can be computed by\nIt remains to derive an expression for .\nThe KKT conditions of (9 ###reference_###) read\nwhere . By the total differential and the chain rule we have . Hence, we can compute the Jacobian by solving the system of linear equations\nObserve that (12 ###reference_###) is a system of linear equations with multiple right-hand sides.\nIn summary, we can compute locally for each by combining (11 ###reference_###) and (12 ###reference_###). The corresponding formulas for the gradient and the Hessian of and from (3 ###reference_###) and (3 ###reference_###), i.e. of the AL relaxation and the relaxation (9 ###reference_###) are given in Appendix A ###reference_###."
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "A Method for Solving the Master Problem",
33
+ "text": "An important question is how to solve the master problem (2 ###reference_###) for different variants of . In general, this can be done by any sensitivity-based NLP solver. We proceed by showing how to obtain a simple globalized version of Algorithm 1 ###reference_### based on a line-search scheme; here, the idea is to show global convergence for the relaxed problem (2 ###reference_###) with for fixed penalty and barrier parameters. This leads to converge of a solution to the original problem (1 ###reference_###) by standard results from penalty and barrier methods [Nocedal2006, Thms. 17.1, 17.6].\nDefine the objective of (2 ###reference_###), , as a global merit function, where . The basic idea is to employ a Sequential Quadratic Programming (SQP) scheme, where we ensure a sufficient decrease in at each step via the Armijo condition. The overall algorithm is summarized in Algorithm 4 ###reference_###. Similar to the general primal decomposition scheme from Algorithm 1 ###reference_###, the master problem solver evaluates the sensitivities in step (i), in order to construct a quadratic approximation of (2 ###reference_###) in step (ii). Solving this approximation yields a search direction .\nThe stepsize is updated with\na backtracking line-search with the Armijo condition as termination criterion."
34
+ },
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "Implementation Aspects",
39
+ "text": "The evaluation of the sensitivities of requires solving local optimization problems (3 ###reference_###) or (3 ###reference_###) for fixed .\nThis can be done using specialized and optimized interior-point solvers, if they allow termination once a certain barrier is reached.\nMoreover, interior-point solvers factorize the KKT matrices (cf. (24 ###reference_###), (26 ###reference_6###)) at each inner iteration and these factorizations can be re-used for Hessian computation via (12 ###reference_###).\nHere we provide two variants: our own interior-point QP solver based on standard techniques for stepsize selection and barrier parameter decrease [Nocedal2006, Chap. 16.6] and the option to use third-party solvers such as Ipopt [Wachter2006].\nIn early iterations, it is typically not necessary to solve the local problems to a high precision, since the barrier parameter is still large and the penalty parameters are still small.\nHence, we solve the subproblems to an accuracy measured in the violation of the optimality conditions and terminate if or . This is inspired by the termination of inexact interior-point methods [Byrd1998]. Warm-starting the local solves with the solution of the previous iteration reduces computation time significantly."
40
+ },
41
+ {
42
+ "section_id": "7",
43
+ "parent_section_id": null,
44
+ "section_name": "Numerical Case Studies",
45
+ "text": "We consider an optimal control problem for a city district with a scalable number of commercial buildings connected via a electricity grid with limited capacity. The building data is from [Rawlings2018]. We neglect the waterside HVAC system and assume that the buildings are equipped with heat pumps with a constant coefficient of performance.\n###figure_2###"
46
+ },
47
+ {
48
+ "section_id": "7.1",
49
+ "parent_section_id": "7",
50
+ "section_name": "District HVAC",
51
+ "text": "The evolution of the temperature of the th zone in the th building reads\nwhere at time step , is the temperature of zone and the ambient temperature, is the thermal capacity, and are heat transfer coefficients with the ambient and between two zones. Moreover, are the controllable/uncontrollable heat influxes from the heat pump and from sources of disturbance such as solar irradiation and occupancy.\nEq. (14 ###reference_###) can be written in compact form as\nwhere and .\nThis yields a state-space model\nStacking the above over time steps yields\nwhere , , , and are the initial temperatures.\nDefine the total energy consumption of building at time step by , and .\nThen, the above is equivalent to\nThe grid coupling between all subsystems induces an upper-bounded energy supply\nwriting as a global constraint:\n for all times .\nMoreover, we have local comfort constraints ."
52
+ },
53
+ {
54
+ "section_id": "7.2",
55
+ "parent_section_id": "7",
56
+ "section_name": "Optimal Power Flow",
57
+ "text": "Optimal Power Flow (OPF) aims at minimizing the cost of power generation in power systems while satisfying all grid and generator constraints.\nA standard OPF formulation reads\nwhere is the active power generation of generators, the diagonal matrix and vector are composed of generator-specific cost coefficients, are active power demands and are voltage angles at each bus.\nIn (18b ###reference_.2###), is the bus susceptance matrix and is the branch susceptance matrix, which map the voltage angles to power injections and power flows over transmission lines respectively, cf. [Molzahn2019].\nThe matrix maps generator injections to connecting buses.\nThe constraint (18c ###reference_.3###) expresses generation and line flow limits.\nPower grids are typically structured in hierarchy levels reaching from extra-high voltage to low-voltage grids.\nAs a numerical test case, we consider the IEEE 300-bus test system to which we connect a varying amount of 118-bus sub-grids (with data from the MATPOWER database [Zimmerman2011]). We add a small regularization term of on the main diagonal of each to make the problem strongly convex in order to meet the conditions of Assumption 1 ###reference_1###.\nTo obtain a problem in form of (1 ###reference_###), we introduce decision variables for the master grid , where is an auxiliary variable corresponding to the active power at interconnecting nodes with the lower-level network.\nFor the th lower-level subproblem we get\nHere, are a selection matrices, which couple power demand/generation at interconnecting nodes between subsystems.\nThe master problem then reads\nwhere is a reference (slack) constraint in order to obtain an unique angle solutions , and is a matrix mapping the coupling variables to coupling buses.\n###table_1###"
58
+ },
59
+ {
60
+ "section_id": "7.3",
61
+ "parent_section_id": "7",
62
+ "section_name": "Numerical Results",
63
+ "text": "We benchmark our algorithms against ADMM (as one of the most popular algorithms for decomposition) and against Ipopt v3.14.4 (as one of the most prominent centralized NLP solvers).555The ADMM-based QP solver OSQP solver did not converge for the problems presented here.\nThe particular variant of ADMM can be found in an extended version of this work [Engelmann2022b].\nWe rely on OSQP v0.6.2 [Stellato2020] for solving subproblems and the coordination problem in ADMM.\nIn primal decomposition, we rely on our own interior-point solver for the subproblems and on Algorithm 4 ###reference_### for coordination, where we solve (13 ###reference_###) via Ipopt.666Note that the proposed framework is flexible with respect to the interior point solvers used in the subproblems as long as one can access the corresponding sensitivity matrices. \nWe perform all simulations on a shared-memory virtual machine with 30 cores and 100GiB memory.\nThe underlying hardware is exclusively used for the case studies.\nAll algorithms are parallelized via Julia multi-threading\u2014thus all subproblems are solved on multiple cores in parallel.\nWe compare the numerical performance of all algorithms on OCP (17 ###reference_###) for buildings, and on the OPF problem (18 ###reference_###) with sub-grids. Table 1 ###reference_### shows the corresponding number of local/global decision variables /, the number of local equality/inequality constraints /, and the number of global equality/inequality constraints /.\nWe employ ADMM from [Engelmann2022b] with penalty parameters .\n###figure_3### ###figure_4### ###figure_5### ###figure_6### 4(a) ###reference_sf1### illustrates the numerical performance of both primal decomposition variants and ADMM for the HVAC problems. Figure 5 ###reference_### shows their performance for the OPF problems.\n4(b) ###reference_sf2### shows the AL formulation only, since the 1 formulation runs out of memory for this problem.\nThe constraint violations for the equality constraints (1b ###reference_2###), ,\nfor the inequality constraints (1c ###reference_3###),\n, and the value of the cost function from (1a ###reference_1###) are displayed, where the x-axis shows the iteration count.\nOne can observe that the primal decomposition schemes achieve a high degree of feasibility in less than 10 iterations for all cases.\nMoreover, the optimality gap is below in less than 10 iterations for both primal decomposition variants and for all , where is computed via Ipopt.\nFor ADMM, infeasibility and the optimality gap stay large independently of the choice of .\nThe reason for the poor scaling of the 1-formulation is two-fold: First, the relaxation (3 ###reference_###) introduces additional slack variables and inequality constraints.\nHence, the KKT system in the subproblems defined via (26 ###reference_6###) has a larger size\nthan the KKT system we get with the AL formulation (24 ###reference_###).\nMoreover, the additional inequality constraints potentially lead to smaller stepsizes due to the fraction-to-boundary rule [Nocedal2006, Eq 19.9]. Hence, more iterations in the subproblems are required compared to the AL formulation."
64
+ },
65
+ {
66
+ "section_id": "8",
67
+ "parent_section_id": null,
68
+ "section_name": "Discussion of Algorithmic Properties",
69
+ "text": "Next, we discuss algorithm properties in view of the desirable properties from Section 1 ###reference_###."
70
+ },
71
+ {
72
+ "section_id": "9",
73
+ "parent_section_id": null,
74
+ "section_name": "Conclusion and Outlook",
75
+ "text": "We have presented two primal decomposition schemes to solve large-scale QPs for the operation of infrastructure networks.\nThe developed methods are proven to converge globally to the optimal solution.\nNumerical experiments have demonstrated their potential for solving large-scale QPs in a small number of iterations to a high degree of feasibility and optimality, which distinguishes them from classical distributed methods such as ADMM.\nMoreover, we have shown that primal decomposition based on augmented Lagrangians has numerical benefits compared to the classical 1-formulation.\nFuture work will further improve implementation aspects of the developed primal decompositions schemes. Sparse backsolves or quasi-Newton Hessian approximations have the potential to greatly accelerate Hessian computation."
76
+ }
77
+ ],
78
+ "appendix": [
79
+ {
80
+ "section_id": "Appendix 1",
81
+ "parent_section_id": null,
82
+ "section_name": "Appendix A Sensitivities for Augmented Lagrangians",
83
+ "text": "Observe that for computing and in (10 ###reference_###) and (11 ###reference_###), the partial derivatives of the implicit function and are required.\nNext, we derive these quantities for the two relaxed local problems (3 ###reference_###) and (3 ###reference_###).\nFor (3 ###reference_###), the Lagrangian (omitting arguments) reads\nHence, the local KKT conditions read\nwhere .\nMoreover,\nwhere .\nMoreover, by (10 ###reference_###),\n\nFurthermore, by (11 ###reference_###),\nwhere is computed by the system of linear equations"
84
+ },
85
+ {
86
+ "section_id": "Appendix x1",
87
+ "parent_section_id": null,
88
+ "section_name": "Sensitivities for the Formulation",
89
+ "text": "The Lagrangian to (3 ###reference_###) reads\nHence, the KKT conditions require\nwhere .\nThus,\nwhere , , and .\nMoreover,\nFurthermore, , , and .\nThus, by (11 ###reference_###),"
90
+ },
91
+ {
92
+ "section_id": "Appendix 2",
93
+ "parent_section_id": null,
94
+ "section_name": "Appendix B Proof of Lemma\u00a01",
95
+ "text": "First, we will show that for a regular, symmetric , with .\nConsider a re-ordered eigendecomposition and partition , such that is a nullspace-basis of , i.e. .\nHence, we have since .\nThus, .\nAgain, since , by expansion, .\nProof of a): By (22 ###reference_###), we need for computing , where is defined by (23 ###reference_###).\nDefine , and\n\nConsider (21 ###reference_###) and parametrize , where is a nullspace matrix to , i.e., the columns of form an orthogonal basis of the nullspace of and is an auxiliary matrix.\nUsing the above parametrization in (23 ###reference_###) and multiplying with yields\n by .\nSince and Assumption 1 holds, we have and thus is invertible by full rank of .\nHence, by (22 ###reference_###) and the above derivation, \nNotice that is a diagonal matrix with ones and zeros.\nHence, since is positive definite, it suffices to show that for the worst case, i.e. (no constraints).\nThus, by the definition of , by Assumption 1 ###reference_1### a) and the Schur-complement Lemma [Boyd2004, A.14].\nProof of b): By (28 ###reference_8###), we need to show that , which can be computed by the system of linear equations (26 ###reference_6###), (27 ###reference_7###).\nDefine and\n\nBy Assumption 1 ###reference_1###, , and , we have that .\nHence, .\nThus, . Since and by full rank of from Assumption 1 ###reference_1###, and thus .\nSince , all leading principle minors of this matrix must be positive definite by Sylvester\u2019s criterion [Horn2013, Col 7.1.5].\nBy variable reordering, the assertion follows."
96
+ },
97
+ {
98
+ "section_id": "Appendix 3",
99
+ "parent_section_id": null,
100
+ "section_name": "Appendix C Solution of (1) via ADMM",
101
+ "text": "We derive a distributed ADMM version for (1 ###reference_###) as a baseline for numerical comparison.\nConsider (1 ###reference_###), introduce auxiliary variables and consensus constraints .\nThis yields\nThe augmented Lagrangian with respect to reads\nwhere are defined by (29a ###reference_.1###)-(29c ###reference_.3###) and is the indicator function for (29d ###reference_.4###).\nMinimizing w.r.t. for fixed yields for all\nMinimising w.r.t. for fixed yields\nFinally, the Lagrange multiplier update reads\nThe update rules (C ###reference_###)-(32 ###reference_###) define the ADMM iterations.\nNote that (C ###reference_###) and (32 ###reference_###) can be executed locally for all , whereas (31 ###reference_###) defines the global coordination step."
102
+ }
103
+ ],
104
+ "tables": {
105
+ "1": {
106
+ "table_html": "<figure class=\"ltx_table\" id=\"S7.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S7.T1.9.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S7.T1.10.2\" style=\"font-size:90%;\">Number of decision variables and constraints for the HVAC and OPF problems.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S7.T1.7\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S7.T1.7.7\">\n<td class=\"ltx_td ltx_border_t\" id=\"S7.T1.7.7.8\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.4.4.4\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.5.5.5\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.6.6.6\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.7.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.7.8.1\">\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.8.1.1\" rowspan=\"3\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_top\" id=\"S7.T1.7.8.1.1.1\" style=\"width:5.7pt;\">\n<span class=\"ltx_p\" id=\"S7.T1.7.8.1.1.1.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S7.T1.7.8.1.1.1.1.1\" style=\"width:6.8pt;height:28.4pt;vertical-align:-10.8pt;\"><span class=\"ltx_transformed_inner\" style=\"width:28.3pt;transform:translate(-10.75pt,0pt) rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"S7.T1.7.8.1.1.1.1.1.1\">HVAC</span>\n</span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.8.1.2\">30</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.8.1.3\">28,200</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.8.1.4\">690</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.8.1.5\">15,090</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.8.1.6\">28,800</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.8.1.7\">0</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.8.1.8\">1,403</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.7.9.2\">\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T1.7.9.2.1\">180</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T1.7.9.2.2\">169,200</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T1.7.9.2.3\">4,140</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T1.7.9.2.4\">90,540</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T1.7.9.2.5\">172,800</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T1.7.9.2.6\">0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T1.7.9.2.7\">8,303</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.7.10.3\">\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T1.7.10.3.1\">300</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T1.7.10.3.2\">282,000</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T1.7.10.3.3\">6,900</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T1.7.10.3.4\">150,900</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T1.7.10.3.5\">288,000</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T1.7.10.3.6\">0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S7.T1.7.10.3.7\">13,823</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.7.11.4\">\n<td class=\"ltx_td ltx_align_right ltx_border_b 
ltx_border_t\" id=\"S7.T1.7.11.4.1\" rowspan=\"2\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_top\" id=\"S7.T1.7.11.4.1.1\" style=\"width:5.7pt;\">\n<span class=\"ltx_p\" id=\"S7.T1.7.11.4.1.1.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S7.T1.7.11.4.1.1.1.1\" style=\"width:6.8pt;height:21.1pt;vertical-align:-7.1pt;\"><span class=\"ltx_transformed_inner\" style=\"width:21.1pt;transform:translate(-7.14pt,0pt) rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"S7.T1.7.11.4.1.1.1.1.1\">OPF</span>\n</span></span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.11.4.2\">29</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.11.4.3\">11,191</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.11.4.4\">809</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.11.4.5\">9,557</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.11.4.6\">14,880</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.11.4.7\">712</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S7.T1.7.11.4.8\">960</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.7.12.5\">\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S7.T1.7.12.5.1\">64</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S7.T1.7.12.5.2\">23,756</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S7.T1.7.12.5.3\">844</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S7.T1.7.12.5.4\">20,232</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S7.T1.7.12.5.5\">31,680</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S7.T1.7.12.5.6\">712</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S7.T1.7.12.5.7\">960</td>\n</tr>\n</tbody>\n</table>\n</figure>",
107
+ "capture": "Table 1: Number of decision variables and constraints for the HVAC and OPF problems."
108
+ },
109
+ "2": {
110
+ "table_html": "<figure class=\"ltx_table\" id=\"S8.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S8.T2.33.3.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S8.T2.4.2\" style=\"font-size:90%;\">Timing and number of iterations for the HVAC and the OPF problem with buildings and sub-grids, 30 cores.</span></figcaption><div class=\"ltx_flex_figure\">\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<table class=\"ltx_tabular ltx_centering ltx_figure_panel ltx_guessed_headers ltx_align_middle\" id=\"S8.T2.30\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S8.T2.30.27.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S8.T2.30.27.1.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S8.T2.30.27.1.2\"></th>\n<td class=\"ltx_td ltx_border_t\" id=\"S8.T2.30.27.1.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S8.T2.30.27.1.4\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S8.T2.30.27.1.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.30.27.1.6\"><span class=\"ltx_text ltx_font_typewriter\" id=\"S8.T2.30.27.1.6.1\">Ipopt</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.30.27.1.7\">ADMM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.30.27.1.8\">ADMM</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T2.8.4\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S8.T2.8.4.5\"></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S8.T2.5.1.1\"></th>\n<td class=\"ltx_td\" id=\"S8.T2.8.4.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.8.4.7\">AL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.6.2.2\">\n1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.8.4.8\">par. 
LA</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.7.3.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.8.4.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T2.10.6\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S8.T2.10.6.3\" rowspan=\"3\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_top\" id=\"S8.T2.10.6.3.1\" style=\"width:5.7pt;\">\n<span class=\"ltx_p\" id=\"S8.T2.10.6.3.1.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S8.T2.10.6.3.1.1.1\" style=\"width:6.8pt;height:28.4pt;vertical-align:-10.8pt;\"><span class=\"ltx_transformed_inner\" style=\"width:28.3pt;transform:translate(-10.75pt,0pt) rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"S8.T2.10.6.3.1.1.1.1\">HVAC</span>\n</span></span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S8.T2.10.6.4\">300</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.10.6.5\">t[s]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.10.6.6\">431.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.10.6.7\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.10.6.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S8.T2.10.6.8.1\">386.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.9.5.1\">892.8<sup class=\"ltx_sup\" id=\"S8.T2.9.5.1.1\">\u2217</sup>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.10.6.2\">1,122.2<sup class=\"ltx_sup\" id=\"S8.T2.10.6.2.1\">\u2217</sup>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T2.12.8\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S8.T2.12.8.3\">180</th>\n<td class=\"ltx_td\" id=\"S8.T2.12.8.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.12.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S8.T2.12.8.5.1\">195.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.12.8.6\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.12.8.7\">218.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.11.7.1\">510.3<sup class=\"ltx_sup\" id=\"S8.T2.11.7.1.1\">\u2217</sup>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.12.8.2\">1,322.2<sup class=\"ltx_sup\" id=\"S8.T2.12.8.2.1\">\u2217</sup>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T2.14.10\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S8.T2.14.10.3\">30</th>\n<td class=\"ltx_td\" id=\"S8.T2.14.10.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.14.10.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S8.T2.14.10.5.1\">18.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.14.10.6\">270.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.14.10.7\">25.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.13.9.1\">72.8<sup class=\"ltx_sup\" id=\"S8.T2.13.9.1.1\">\u2217</sup>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.14.10.2\">85.53<sup class=\"ltx_sup\" id=\"S8.T2.14.10.2.1\">\u2217</sup>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T2.16.12\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S8.T2.16.12.3\" rowspan=\"2\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_top\" id=\"S8.T2.16.12.3.1\" style=\"width:5.7pt;\">\n<span class=\"ltx_p\" id=\"S8.T2.16.12.3.1.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S8.T2.16.12.3.1.1.1\" style=\"width:6.8pt;height:21.1pt;vertical-align:-7.1pt;\"><span class=\"ltx_transformed_inner\" style=\"width:21.1pt;transform:translate(-7.14pt,0pt) 
rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"S8.T2.16.12.3.1.1.1.1\">OPF</span>\n</span></span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S8.T2.16.12.4\">64</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.16.12.5\">t[s]</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.16.12.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S8.T2.16.12.6.1\">2.64</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.16.12.7\">283.89</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.16.12.8\">4.64</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.15.11.1\">70.52<sup class=\"ltx_sup\" id=\"S8.T2.15.11.1.1\">\u2217</sup>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.16.12.2\">111.29<sup class=\"ltx_sup\" id=\"S8.T2.16.12.2.1\">\u2217</sup>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T2.17.13\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S8.T2.17.13.2\">29</th>\n<td class=\"ltx_td\" id=\"S8.T2.17.13.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.17.13.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S8.T2.17.13.4.1\">1.83</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.17.13.5\">95.76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.17.13.6\">2.01</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.17.13.7\">13.54</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.17.13.1\">61.58<sup class=\"ltx_sup\" id=\"S8.T2.17.13.1.1\">\u2217</sup>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T2.19.15\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S8.T2.19.15.3\" rowspan=\"3\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_top\" id=\"S8.T2.19.15.3.1\" style=\"width:5.7pt;\">\n<span class=\"ltx_p\" id=\"S8.T2.19.15.3.1.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S8.T2.19.15.3.1.1.1\" style=\"width:6.8pt;height:28.4pt;vertical-align:-10.8pt;\"><span class=\"ltx_transformed_inner\" style=\"width:28.3pt;transform:translate(-10.75pt,0pt) rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"S8.T2.19.15.3.1.1.1.1\">HVAC</span>\n</span></span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S8.T2.19.15.4\">300</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.19.15.5\">iter.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.19.15.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S8.T2.19.15.6.1\">13</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.19.15.7\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.19.15.8\">145</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.18.14.1\">5,000<sup class=\"ltx_sup\" id=\"S8.T2.18.14.1.1\">\u2217</sup>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.19.15.2\">5,000<sup class=\"ltx_sup\" id=\"S8.T2.19.15.2.1\">\u2217</sup>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T2.21.17\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S8.T2.21.17.3\">180</th>\n<td class=\"ltx_td\" id=\"S8.T2.21.17.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.21.17.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S8.T2.21.17.5.1\">12</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.21.17.6\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.21.17.7\">141</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.20.16.1\">5,000<sup 
class=\"ltx_sup\" id=\"S8.T2.20.16.1.1\">\u2217</sup>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.21.17.2\">5,000<sup class=\"ltx_sup\" id=\"S8.T2.21.17.2.1\">\u2217</sup>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T2.23.19\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S8.T2.23.19.3\">30</th>\n<td class=\"ltx_td\" id=\"S8.T2.23.19.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.23.19.5\">13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.23.19.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S8.T2.23.19.6.1\">12</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.23.19.7\">104</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.22.18.1\">5,000<sup class=\"ltx_sup\" id=\"S8.T2.22.18.1.1\">\u2217</sup>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.23.19.2\">5,000<sup class=\"ltx_sup\" id=\"S8.T2.23.19.2.1\">\u2217</sup>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T2.25.21\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S8.T2.25.21.3\" rowspan=\"2\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_top\" id=\"S8.T2.25.21.3.1\" style=\"width:5.7pt;\">\n<span class=\"ltx_p\" id=\"S8.T2.25.21.3.1.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S8.T2.25.21.3.1.1.1\" style=\"width:6.8pt;height:21.1pt;vertical-align:-7.1pt;\"><span class=\"ltx_transformed_inner\" style=\"width:21.1pt;transform:translate(-7.14pt,0pt) rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"S8.T2.25.21.3.1.1.1.1\">OPF</span>\n</span></span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S8.T2.25.21.4\">64</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.25.21.5\">iter.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.25.21.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S8.T2.25.21.6.1\">9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.25.21.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S8.T2.25.21.7.1\">9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.25.21.8\">16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.24.20.1\">3,199<sup class=\"ltx_sup\" id=\"S8.T2.24.20.1.1\">\u2217</sup>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.25.21.2\">5,000<sup class=\"ltx_sup\" id=\"S8.T2.25.21.2.1\">\u2217</sup>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T2.26.22\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S8.T2.26.22.2\">29</th>\n<td class=\"ltx_td\" id=\"S8.T2.26.22.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.26.22.4\">9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.26.22.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S8.T2.26.22.5.1\">7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.26.22.6\">15</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.26.22.7\">1,133</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T2.26.22.1\">5,000<sup class=\"ltx_sup\" id=\"S8.T2.26.22.1.1\">\u2217</sup>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T2.28.24\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S8.T2.28.24.3\"></th>\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S8.T2.28.24.4\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.28.24.5\">term.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S8.T2.27.23.1\">rel. opt. 
\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T2.28.24.6\">optimal</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S8.T2.28.24.2\">rel. opt. \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T2.30.26\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_b\" id=\"S8.T2.30.26.3\"></th>\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_b\" id=\"S8.T2.30.26.4\"></th>\n<td class=\"ltx_td ltx_border_b\" id=\"S8.T2.30.26.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" colspan=\"2\" id=\"S8.T2.29.25.1\">infeas. \n</td>\n<td class=\"ltx_td ltx_border_b\" id=\"S8.T2.30.26.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" colspan=\"2\" id=\"S8.T2.30.26.2\">infeas. \n</td>\n</tr>\n</tbody>\n</table>\n</div>\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<p class=\"ltx_p ltx_figure_panel ltx_align_center\" id=\"S8.T2.31\"><sup class=\"ltx_sup\" id=\"S8.T2.31.1\"><span class=\"ltx_text\" id=\"S8.T2.31.1.1\" style=\"font-size:83%;\">\u2217</span></sup><span class=\"ltx_text\" id=\"S8.T2.31.2\" style=\"font-size:83%;\">terminated because max. iterations reached.</span></p>\n</div>\n</div>\n</figure>",
111
+ "capture": "Table 2: Timing and number of iterations for the HVAC and the OPF problem with buildings and sub-grids, 30 cores."
112
+ },
113
+ "3": {
114
+ "table_html": "<figure class=\"ltx_table\" id=\"S8.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S8.T3.3.1.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S8.T3.4.2\" style=\"font-size:90%;\">Internal timing (%) for the AL formulation and the HVAC problem, 30 cores.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S8.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S8.T3.1.1\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S8.T3.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S8.T3.1.1.2\">sensitivity eval.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S8.T3.1.1.3\">local sol.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S8.T3.1.1.4\">coord.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S8.T3.1.1.5\">line search</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S8.T3.1.1.6\">other</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S8.T3.1.2.1\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S8.T3.1.2.1.1\">300</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T3.1.2.1.2\">68.10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T3.1.2.1.3\">6.66</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T3.1.2.1.4\">6.10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T3.1.2.1.5\">17.84</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S8.T3.1.2.1.6\">1.30</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T3.1.3.2\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S8.T3.1.3.2.1\">180</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T3.1.3.2.2\">41.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T3.1.3.2.3\">19.86</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T3.1.3.2.4\">9.02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T3.1.3.2.5\">26.89</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S8.T3.1.3.2.6\">2.65</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T3.1.4.3\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_b\" id=\"S8.T3.1.4.3.1\">30</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S8.T3.1.4.3.2\">4.37</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S8.T3.1.4.3.3\">6.69</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S8.T3.1.4.3.4\">9.35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S8.T3.1.4.3.5\">79.29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S8.T3.1.4.3.6\">0.30</td>\n</tr>\n</tbody>\n</table>\n</figure>",
115
+ "capture": "Table 3: Internal timing (%) for the AL formulation and the HVAC problem, 30 cores."
116
+ },
117
+ "4": {
118
+ "table_html": "<figure class=\"ltx_table\" id=\"S8.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S8.T4.10.1.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S8.T4.11.2\" style=\"font-size:90%;\">Properties of ADMM and Primal Decomposition.</span></figcaption><div class=\"ltx_flex_figure\">\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<table class=\"ltx_tabular ltx_centering ltx_figure_panel ltx_guessed_headers ltx_align_middle\" id=\"S8.T4.6\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S8.T4.6.7.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S8.T4.6.7.1.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S8.T4.6.7.1.2\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S8.T4.6.7.1.3\">ADMM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S8.T4.6.7.1.4\">Primal Decomp.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T4.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S8.T4.2.2.3\">Communication</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S8.T4.2.2.4\">forward</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S8.T4.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S8.T4.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T4.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S8.T4.4.4.3\">(#floats step)</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S8.T4.4.4.4\">backward</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S8.T4.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S8.T4.4.4.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T4.6.8.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S8.T4.6.8.2.1\">Computation</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S8.T4.6.8.2.2\">local</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S8.T4.6.8.2.3\">Convex QP</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S8.T4.6.8.2.4\">NLP</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T4.6.9.3\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S8.T4.6.9.3.1\"></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S8.T4.6.9.3.2\">global</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S8.T4.6.9.3.3\">Convex QP</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S8.T4.6.9.3.4\">NLP + lin. equations</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T4.6.10.4\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S8.T4.6.10.4.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S8.T4.6.10.4.2\"></th>\n<td class=\"ltx_td\" id=\"S8.T4.6.10.4.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S8.T4.6.10.4.4\">with multiple rhs.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T4.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S8.T4.5.5.2\">Conv. 
rate (max.)</th>\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S8.T4.5.5.3\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S8.T4.5.5.4\">Linear</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S8.T4.5.5.1\">(Superlinear)<sup class=\"ltx_sup\" id=\"S8.T4.5.5.1.1\">#</sup>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S8.T4.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_t\" id=\"S8.T4.6.6.2\">Decentralization</th>\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_b ltx_border_t\" id=\"S8.T4.6.6.3\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_t\" id=\"S8.T4.6.6.1\">Decentralized<sup class=\"ltx_sup\" id=\"S8.T4.6.6.1.1\">\u2217</sup>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_t\" id=\"S8.T4.6.6.4\">Distributed</td>\n</tr>\n</tbody>\n</table>\n</div>\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<p class=\"ltx_p ltx_figure_panel ltx_align_center\" id=\"S8.T4.8\"><sup class=\"ltx_sup\" id=\"S8.T4.8.2\"><span class=\"ltx_text\" id=\"S8.T4.8.2.1\" style=\"font-size:83%;\">\u2217</span></sup><span class=\"ltx_text\" id=\"S8.T4.8.1\" style=\"font-size:83%;\">in the sense that decentralization of ADMM is straight-forward. \n<br class=\"ltx_break\"/><sup class=\"ltx_sup\" id=\"S8.T4.8.1.1\"><span class=\"ltx_text\" id=\"S8.T4.8.1.1.1\" style=\"font-size:83%;\">#</span></sup><span class=\"ltx_text\" id=\"S8.T4.8.1.2\" style=\"font-size:83%;\">to a solution of the barrier problem (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2212.11571v2#S3.E5\" title=\"In Dealing With Infeasibility \u2023 3 Primal Decomposition Schemes \u2023 Scalable Primal Decomposition Schemes for Large-Scale Infrastructure Networks\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>)</span></span></p>\n</div>\n</div>\n</figure>",
119
+ "capture": "Table 4: Properties of ADMM and Primal Decomposition."
120
+ }
121
+ },
122
+ "image_paths": {
123
+ "1": {
124
+ "figure_path": "2212.11571v2_figure_1.png",
125
+ "caption": "Figure 1: Star graph of problem (1).",
126
+ "url": "http://arxiv.org/html/2212.11571v2/x1.png"
127
+ },
128
+ "2": {
129
+ "figure_path": "2212.11571v2_figure_2.png",
130
+ "caption": "Figure 2: Communication in Algorithms 2 and 3.",
131
+ "url": "http://arxiv.org/html/2212.11571v2/x2.png"
132
+ },
133
+ "3": {
134
+ "figure_path": "2212.11571v2_figure_3.png",
135
+ "caption": "Figure 3: Buildings connected via network with limited capacity.",
136
+ "url": "http://arxiv.org/html/2212.11571v2/x3.png"
137
+ },
138
+ "4(a)": {
139
+ "figure_path": "2212.11571v2_figure_4(a).png",
140
+ "caption": "(a) |\ud835\udcae|=30\ud835\udcae30|\\mathcal{S}|=30| caligraphic_S | = 30 buildings.\nFigure 4: Convergence for three HVAC problems.",
141
+ "url": "http://arxiv.org/html/2212.11571v2/x4.png"
142
+ },
143
+ "4(b)": {
144
+ "figure_path": "2212.11571v2_figure_4(b).png",
145
+ "caption": "(b) |\ud835\udcae|=300\ud835\udcae300|\\mathcal{S}|=300| caligraphic_S | = 300 buildings.\nFigure 4: Convergence for three HVAC problems.",
146
+ "url": "http://arxiv.org/html/2212.11571v2/x5.png"
147
+ },
148
+ "5(a)": {
149
+ "figure_path": "2212.11571v2_figure_5(a).png",
150
+ "caption": "(a) |\ud835\udcae|=29\ud835\udcae29|\\mathcal{S}|=29| caligraphic_S | = 29 sub-grids.\nFigure 5: Convergence for two OPF problems.",
151
+ "url": "http://arxiv.org/html/2212.11571v2/x6.png"
152
+ },
153
+ "5(b)": {
154
+ "figure_path": "2212.11571v2_figure_5(b).png",
155
+ "caption": "(b) |\ud835\udcae|=64\ud835\udcae64|\\mathcal{S}|=64| caligraphic_S | = 64 sub-grids.\nFigure 5: Convergence for two OPF problems.",
156
+ "url": "http://arxiv.org/html/2212.11571v2/x7.png"
157
+ },
158
+ "6": {
159
+ "figure_path": "2212.11571v2_figure_6.png",
160
+ "caption": "Figure 6: Sparsity patterns of \u2207y\u2062y2\u03d52\u2062(y)superscriptsubscript\u2207\ud835\udc66\ud835\udc662subscriptitalic-\u03d52\ud835\udc66\\nabla_{yy}^{2}\\phi_{2}(y)\u2207 start_POSTSUBSCRIPT italic_y italic_y end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT italic_\u03d5 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ( italic_y ) for the HVAC problem with |\ud835\udcae|=4\ud835\udcae4|\\mathcal{S}|=4| caligraphic_S | = 4 (left) and the OPF problem with |\ud835\udcae|=29\ud835\udcae29|\\mathcal{S}|=29| caligraphic_S | = 29 (right).",
161
+ "url": "http://arxiv.org/html/2212.11571v2/x8.png"
162
+ },
163
+ "7(a)": {
164
+ "figure_path": "2212.11571v2_figure_7(a).png",
165
+ "caption": "(a) HVAC problem with |\ud835\udcae|=30\ud835\udcae30|\\mathcal{S}|=30| caligraphic_S | = 30.\nFigure 7: Communication (#floats) for one subsystem.",
166
+ "url": "http://arxiv.org/html/2212.11571v2/x9.png"
167
+ },
168
+ "7(b)": {
169
+ "figure_path": "2212.11571v2_figure_7(b).png",
170
+ "caption": "(b) OPF problem with |\ud835\udcae|=29\ud835\udcae29|\\mathcal{S}|=29| caligraphic_S | = 29.\nFigure 7: Communication (#floats) for one subsystem.",
171
+ "url": "http://arxiv.org/html/2212.11571v2/x10.png"
172
+ }
173
+ },
174
+ "validation": true,
175
+ "references": [],
176
+ "url": "http://arxiv.org/html/2212.11571v2"
177
+ }
20241127/2305.19353v5.json ADDED
@@ -0,0 +1,516 @@
1
+ {
2
+ "title": "Bearing-Constrained Leader-Follower Formation of Single-Integrators with Disturbance Rejection: Adaptive Variable-Structure Approaches",
3
+ "abstract": "This paper studies the problem of stabilizing a leader-follower formation specified by a set of bearing constraints while disturbed by some unknown uniformly bounded disturbances. A set of leaders are positioned at their desired positions while each follower is modeled by a single integrator with an additive time-varying disturbance. Adaptive variable-structure control laws using displacements or only bearing vectors are applied to stabilize the desired formation. Thanks to the adaptive mechanisms, the proposed control laws require neither information on the bearing Laplacian nor the directions and upper bounds of the disturbances.\nIt is further proved that when the leaders are moving with the same bounded uniformly continuous velocity, the moving target formation can be achieved under the proposed control laws. Simulation results are provided to support the stability analysis.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Recent years have been a booming research period for bearing-based formation control [34 ###reference_b34###, 26 ###reference_b26###], a research topic inspired from the observation that animals can self-localize, navigate, and perform formation-type collective behaviors using their vision power. Research from diverse fields suggests that that fairly simple vision-based guidance rules in animals can unfold sophisticated formation-type phenomena [15 ###reference_b15###]. From an engineering perspective, there have been ongoing attempts to understand and realize these displayed formations. Considered as the eye of an autonomous agent (UAV, AGV), the camera provides the bearing vectors (directional information) from the agent to some neighboring agents. In addition to providing an alternative solution for other formation approaches (position-, displacement-, distance-based formation control, etc.) [16 ###reference_b16###], bearing-only control laws are preferred since they reduce the number of sensors used by each agent, cut down on deployment cost, and do not transmit any signals [30 ###reference_b30###]. In addition, research results from bearing-constrained formation control are applicable to the dual problem - bearing-based localization in wireless sensor networks [35 ###reference_b35###].\nThe theoretical basis of bearing-constrained formation control in -dimensional space () was developed in [34 ###reference_b34###, 35 ###reference_b35###, 33 ###reference_b33###]. Several initial studies on the bearing/directional rigidity theory in two- or three-dimensional space can be found in [6 ###reference_b6###, 2 ###reference_b2###, 20 ###reference_b20###, 27 ###reference_b27###]. As robustness is an importance issue of any multiagent system, consensus and formation control under disturbances were studied by [3 ###reference_b3###, 9 ###reference_b9###, 4 ###reference_b4###, 28 ###reference_b28###].\nAlthough disturbances can be actively included for additional objectives such as escaping from an undesired unstable formation [26 ###reference_b26###], or formation maneuver [5 ###reference_b5###], the presence of unknown disturbances usually makes the target formation unachievable or causes unexpected formation motions. Robust bearing-constrained formation acquisition/tracking have recently been proposed in the literature. The works [13 ###reference_b13###, 14 ###reference_b14###, 32 ###reference_b32###, 31 ###reference_b31###, 12 ###reference_b12###] assumed the leaders\u2019 velocity and the disturbances are constant, or their upper bounds are known by the agents, or the rate of bearings is available. The work [8 ###reference_b8###] proposed an elevation-based bearing-only formation control with disturbance rejection for single- and double-integrators. However, the method in [8 ###reference_b8###] is only effective for minimally rigid formations, and for double integrators, velocity measurements are required. The authors in [29 ###reference_b29###] studied bearing-only formation control with fault tolerance and time delays. Actuator faults were modeled as a disturbance of unknown control direction, which can be compensated by a control action with an appropriate control gain. The authors in [1 ###reference_b1###] proposed a robust adaptive design method to attenuate the effects of the disturbances to satisfy a specific performance requirement. 
The authors in [25 ###reference_b25###] considered the bearing-only acyclic formation tracking problem with unknown leaders\u2019 velocity using two time-varying gains. Formation maneuver via bearing-only estimation for time-varying leader-follower formations was also proposed in [10 ###reference_b10###, 22 ###reference_b22###, 23 ###reference_b23###]. A moving target formation was cooperatively estimated from the measured bearing vectors, and each follower controls its position to track its estimated target position.\nThis paper focuses on the bearing-based leader-follower formation control problems with single-integrator modeled agents perturbed by unknown and bounded uniformly continuous disturbances. By bearing-based, we assume that the geometric constraints that define the target formation are given as a set of bearing vectors. There are several leaders in the formation, whose positions already satisfy a subset of bearing constraints. The remaining agents, called followers, can measure either (i) the relative positions (displacement-based control) or (ii) the bearing vector (bearing-only control) to their neighbors. The interaction topology between agents is not restricted into an acyclic graph, but is applicable to any infinitesimal bearing rigid formation having at least two leaders.\nUnlike [24 ###reference_b24###], where a disturbance-free finite-time bearing-only formation control was studied or a small adaptive perturbation was purposely included to globally stabilize the target formation in finite time, the disturbances in this work represent unmodeled dynamics or the effects of the environment. The problem is firstly solved under the assumption that the agents can measure the relative displacements. The solution for relative-displacement provides hints for the more challenging task of stabilizing the desired formation when agents can only sense the bearing vectors. Intuitively, since no information on the distances is available, in order to suppress the disturbances with unknown magnitude, the gain of the bearing-only control law should be increased whenever all bearing constraints are not satisfied. This intuition is mathematically realized by adaptive variable-structure control (also known as adaptive sliding mode control), which can provide fast convergence and robustness with regard to disturbances [17 ###reference_b17###, 18 ###reference_b18###]. The main novelty of the proposed control laws is providing a distributed adaptive mechanism that alters the magnitude of the control law with regard to the errors between the desired and the measured bearing constraints. In this way, the control input eventually approximates the magnitude of the disturbance, rejects the disturbance and stabilizes the target formation without requiring any inter-agent communication, a priori information on the upper bound of the disturbance, or the formation\u2019s rigidity index.111Specifically, the smallest eigenvalue of the grounded bearing Laplacian is not needed for stabilizing the formation under unknown disturbances. Modifications of the control laws are proposed to alter the adaptive gains based on the disturbance\u2019s magnitude or to stabilize the target formation in case the upper bound of the unknown disturbance is a polynomial of the formation\u2019s error. Moreover, when the leaders move with the same bounded uniformly continuous velocity, their motions can be considered as disturbances to the bearing errors dynamics of the followers. 
Thus, the proposed adaptive control laws can also be applied to stabilize a time-varying target formation. To sum up, for formations of single integrators, the proposed control laws provide a unified solution to two problems: leader-follower formation control with unknown disturbances and formation tracking with unknown leaders\u2019 velocity.\nThe rest of this paper is organized as follows. Section II ###reference_### presents theoretical background on bearing rigidity theory and formulates the problems. Sections III ###reference_### and IV ###reference_### propose formation control/tracking laws using only displacements and/or only bearing vectors, respectively. Section VI ###reference_### provides numerical simulations. Lastly, section VII ###reference_### concludes the paper.\nNotations. In this paper, the set of real numbers is denoted by . Scalars are denoted by small letters, and vectors (matrices) are denoted by bold-font small (capital) letters. For a matrix , we use , to denote the kernel and the image of , and rank denotes the rank of . The 2-norm and 1-norm of a vector are respectively denoted as and . The identity matrix is denoted by , denote the zero matrix, and denotes the -dimensional zero vector."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Problem statement",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Bearing rigidity theory",
21
+ "text": "Consider a set of points in -dimensional space (). The points are positioned at , with . A framework (or a formation) in the -dimensional space () is specified by an undirected graph (where is the vertex set of vertices and is the edge set of edges without self-loops) and a configuration . The neighbor set of a vertex is defined by . The graph is connected if for any two vertices , we can find a sequence of vertices connected by edges in , which starts from and ends at .\nLet the edges in be indexed as . For each edge , the bearing vector pointing from to is defined by\n, with is the displacement vector between and . It is not hard to check that , where denotes the 2-norm. An edge is oriented if we specify and as the start and the end vertices of , respectively. According to an arbitrarily indexing and orienting of edges in , we can define a corresponding incidence matrix , where if is the start vertex of , if is the end vertex of , and , otherwise. Then, we can define the stacked displacement vector , where .\nFor each bearing vector , we define a corresponding projection matrix . The projection matrix is symmetric positive semidefinite, with a unique zero eigenvalue and unity eigenvalues. Moreover, the kernel of is spanned by , i.e., kerim.\nTwo formations () and () are bearing equivalent if and only if: . They are bearing congruent if and only if\n. A formation () is called globally bearing rigid if any formation having the same bearing constraints with is bearing congruent with .\nLet , the bearing rigidity matrix is defined by\nA formation is infinitesimally bearing rigid in if and only if , this means , where is the formation\u2019s centroid. An example of infinitesimally bearing rigid framework is shown in Fig. 1 ###reference_###.\n###figure_1### ###figure_2### In bearing-based formation, we usually use an augmented bearing rigidity matrix , which has the same rank as well as the same kernel as but does not contain information of the relative distances between the agents . Further, we define the bearing Laplacian which is symmetric and positive semidefinite. For an infinitesimally rigid formation, has exactly zero eigenvalues and kerker."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Problem formulation",
27
+ "text": "Consider a system consisting of autonomous agents in the -dimensional space (), of which the positions are given by . We assume that there exist stationary leader agents in the formation and the remaining agents are followers.\nDefining the vectors and . When the leaders are stationary, we can write .\nThe follower agents are modeled by single integrators in the -dimensional space with the disturbances:\nwhere and denote the position and the disturbance of agent , respectively.\nThe desired formation , where , is defined as follows:\nThe desired formation satisfies\nLeaders\u2019 positions: , and\nBearing constraints: .\nThe following assumption will be used throughout the paper.\nThe formation of leaders and followers satisfies\nThe axes of the local reference frames of agents are aligned.\nThe follower agents are modeled by (1 ###reference_###) and there is no collision between agents.\nThe disturbance vector is bounded and uniformly continuous. The direction and the upper bound of the disturbance (denoted as ) are unknown to the agents.\nThe target formation is infinitesimally bearing rigid in and .\nBy stacking the set of desired bearing vectors as , we have\nUnder the assumption that is infinitesimally bearing rigid in and , it has been shown in [35 ###reference_b35###] that is symmetric positive definite and thus invertible. As a result, the desired formation is uniquely determined from the the leaders\u2019 positions and the bearing vectors by .\nTo achieve a target formation, the agents need to sense some geometric variables relating to the formation. Two types of relative sensing variables, namely, the displacements , and the bearing vectors , , will be considered in this paper. We can now formulate two problems which will be studied in the next sections.\nLet Assumption 1 ###reference_umption1### hold and the agents can sense the relative displacements. Design control laws for agents such that as .\nLet Assumption 1 ###reference_umption1### hold and the agents can sense the bearing vectors. Design control laws for agents such that as ."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "III Displacement-based formation control",
33
+ "text": "In this section, the bearing-based formation control under disturbance is considered under the assumption that the agents can measure the displacement vectors with regard to their neighbors. First, an adaptive variable structure control law which can provide asymptotic convergence of the target formation is proposed. Then, the proposed control law is modified to deal with different assumptions of the disturbances as well as the control objectives."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "III-A Proposed control law",
39
+ "text": "Consider the Problem 1 ###reference_blem1###, the control law is proposed as follows\nwhere, corresponding to each edge , the matrix can be computed from the desired bearing vector , the scalar are adaptive gains, which satisfy , and is a positive constant. As the leaders are stationary, for . In the following analysis, let , , , , , and .\nThe system under the proposed control law (3 ###reference_###) can be expressed in the following form:\nwhere and . For brevity, the short-hands , , and will be used in the subsequent analysis."
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-B Stability analysis",
45
+ "text": "In this subsection, the stability of the system (4 ###reference_###) is considered. Since the right-hand-side of Eqn. (4 ###reference_###)(a) is discontinuous, the solution of (4 ###reference_###)(a) is understood in Fillipov sense [21 ###reference_b21###]. It will be proved that converges to as under the proposed control law (3 ###reference_###).\nConsider the Problem 1 ###reference_blem1###. If , . Under the control law (3 ###reference_###), in finite time.\nLet , and consider the Lyapunov function , which is positive definite, radially unbounded, and bounded by two class functions and , for any . Then, , where and stands for almost everywhere [21 ###reference_b21###]. It follows that\nNote that in the third equality, we have used the fact that , and the inequality (5 ###reference_###) follows from the fact that . Based on the norm inequality for a vector , we can further write\nSubstituting the inequality into equation (6 ###reference_###), we get\nWe prove finite-time convergence of the desired formation by contradiction. If there does not exist a finite time such that , and , then it follows from (7 ###reference_###) that\nor i.e.,\nWhen , the right hand side of the inequality (9 ###reference_###) becomes negative, which causes a contradiction. This contradiction implies that and for . Thus, we conclude that in finite time. An upper bound for is thus .\n\u220e\nLemma 1 ###reference_ma1### suggests that if initially, the gains have been chosen sufficiently large (to dominate ), the desired formation is achieved in finite time. However, some quantities such as the smallest eigenvalue of the grounded bearing Laplacian and the number of agents are usually unavailable. The proposed adaptive mechanism (11d ###reference_.4###) makes the agents achieve the desired formation without requiring any a-priori information on the number of agents , the desired formation\u2019s structure and the upper bound of the disturbance.\nConsider the Problem 1 ###reference_blem1###. Under the control law (3 ###reference_###), the following statements hold:\n, as ,\nThere exists a constant vector , such that , as ,\nAdditionally, if , and there exists a finite time such that then in finite time.\n(i) Consider the Lyapunov function , for some . is positive definite with regard to , radially unbounded, and bounded by two class functions and . Similar to the proof of Lemma 1 ###reference_ma1###, we have\nwhich implies that , are bounded, and . Since is uniformly continuous, it follows from Barbalat\u2019s lemma that , or , as .\n(ii) Since is bounded and non-increasing, it has a finite limit. Thus, there exists such that , as .\n(iii) If there exists a finite time such that then for all , the inequality (7 ###reference_###) holds. Therefore, the proof of this statement follows directly from the proof of Lemma 1 ###reference_ma1###.\n\u220e\nSince the control law (3 ###reference_###) uses only signum functions, chattering will be unavoidable. To reduce the magnitude of chattering, a proportional control term into (3 ###reference_###a) can be included as follows:\nfor . If there is no disturbance, the proportion control term is sufficient for achieving the target formation. When disturbances exist, the proportion term contributes to the formation acquisition and disturbance rejection objectives, at a slower rate in comparison with the signum term.\nAn issue with the control law (3a ###reference_1###)\u2013(3c ###reference_3###) is that the control gains is non-decreasing at any time . 
Thus, if the disturbance has a high magnitude for a time interval, and then decreases in time, much control effort will be wasted. To address this issue, we may relax the objective from perfectly achieving a target formation into achieving a good approximation of the target formation. More specifically, we may control the formation under disturbances to reach a small neighborhood of the desired formation in finite-time while the control magnitude estimates the unknown upper bound of the disturbance [17 ###reference_b17###]. A corresponding modified formation control law is then proposed as follows:\nwhere , and are positive constants. For each ,\nSimilar to the proof of Theorem 1 ###reference_orem1###, consider the Lyapunov function , where . We have,\nLet , we have,\nfor some . Thus, when , we have , or . Thus, is globally ultimately bounded. Defining the ball , then enters the ball after a finite time. It follows that after a finite time.\nIt is worth noting that by relaxing the control objective, we also further reduce the chattering behaviors of the formation in both magnitude and switching frequency. Most control efforts are provided to maintain the formation error inside a closed ball, of which the radius is jointly determined by the desired formation (number of bearing constraints and the minimum eigenvalue ) and the control parameters (proportional control gain , adaptation rate , and the decay rate ). Other methods for avoiding chattering may be softening the sign function by the tanh() function [8 ###reference_b8###], or considering a deadzone once error is small enough. Nevertheless, all above mentioned methods need to sacrifice the control performance for eradication of chattering.\nIn the next remark, we further consider a larger class of the disturbance acting on the formation. Let the upper bound of the disturbance be a polynomials of the formation\u2019s error. The main idea is to design adaptive law for each coefficient term [18 ###reference_b18###].\nSuppose that the upper bound of the unknown disturbance acting on the formation satisfies\n, where are unknown positive constants.\nThe following adaptive formation control law is proposed:\nFor stability analysis, let , , and consider the Lyapunov candidate function\nwhere . Then,\nIt follows that\nIt follows that and are uniformly bounded. Similar to the proof of Theorem 1 ###reference_orem1###, we can show that , or , as , and exists. Further, if , where \u201c\u201d is understood to be element-wise, then in finite time."
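The modified law (11) with increasing/decreasing gains is likewise missing its displayed equations. A minimal sketch of a leaky (sigma-modification style) gain update of the kind described is given below; the names `kappa` (adaptation rate) and `sigma` (leakage coefficient) mirror the parameters mentioned in the simulation section, but the exact rule (11c) is an assumption.

```python
# Hedged sketch of a "leaky" adaptive-gain update: the gain grows while the
# bearing constraint is violated and decays otherwise, so control effort is
# not wasted after the disturbance magnitude drops.
def update_gain_leaky(gamma_ij, err_norm1, kappa, sigma, dt):
    gamma_dot = kappa * err_norm1 - sigma * gamma_ij
    return max(gamma_ij + gamma_dot * dt, 0.0)  # keep the gain nonnegative
```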
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "IV Bearing-only based formation control",
51
+ "text": "In this section, we further assume that the agents can measure only the relative bearing vectors with regard to their neighbors. We propose a corresponding adaptive variable-structure bearing-only formation control law and showed that the desired formation can be asymptotically achieved. Moreover, due to the adaptive gains, the effects of unknown time-varying disturbances acting on formation can be completely rejected even when the followers agents are not given any information of the disturbances\u2019 upper bound."
52
+ },
53
+ {
54
+ "section_id": "4.1",
55
+ "parent_section_id": "4",
56
+ "section_name": "IV-A Proposed control law",
57
+ "text": "Consider the system of single-integrator agents with disturbance (1 ###reference_###). The bearing-only control law for each follower agent is proposed as follows\nDenoting , we can express the -agent system under the control law (17a ###reference_.1###)\u2013(17b ###reference_.2###) in vector form as follows:\nwhere , and . It is clear that the control law of each agent uses only the bearing vectors with regard to its neighboring agents."
58
+ },
59
+ {
60
+ "section_id": "4.2",
61
+ "parent_section_id": "4",
62
+ "section_name": "IV-B Stability analysis",
63
+ "text": "This subsection studies the stability of the -agent system (18a ###reference_.1###)\u2013(18b ###reference_.2###). Particularly, we show that the desired formation defined as in Definition 1 ###reference_inition1### will be asymptotically achieved as . Since the right-hand-side of Eq. (18a ###reference_.1###) is discontinuous, we understand the solution of (18a ###reference_.1###) in Fillipov sense.\nWe will firstly prove the following lemma.\n[32 ###reference_b32###, Lemma 2] Suppose that no agents coincide in or . The following inequality holds\nwhere the equality holds if and only if .\n[32 ###reference_b32###, Lemma 3] Suppose that no agents coincide in or , then\nFurthermore, if then\nNext, we prove that the adaptive bearing-only control law (17 ###reference_###) guarantees boundedness of the formation\u2019s error in the following lemma.\nConsider the Problem 2 ###reference_blem2### and suppose that there is no collision between agents for . Under the control law (17 ###reference_###), the formation error is uniformly bounded, as and there exists constant vector such that .\nConsider the Lyapunov function\nwhere . Then, if and only if , or equivalently, and . Since is infinitesimally rigid and , the equality implies that . We have\nIt follows that , and are always bounded.\nFurther, from the inequalities\nand Eqn. (22 ###reference_###), we have\nwhich shows that is bounded. Suppose that , i.e.,\nLet and , the inequality\nhas solution\nThus, is uniformly bounded. As there is no collision between agents, is uniformly continuous, and thus as based on Barbalat\u2019s lemma. This implies that as . Moreover, since are bounded and nonincreasing, , it follows that there exists such that .\n\u220e\nThe following lemma gives a sufficient condition for collision avoidance between neighboring agents.\nConsider the Problem 2 ###reference_blem2###. Suppose that , where ,\nthen .\nFor each , we can write . Thus,\nIt follows from (28 ###reference_###) that . Thus, we have\nor in other words, no collision happens for all .\n\u220e\nIn the following theorem, a sufficient condition for stabilizing the desired target formation will be given.\nConsider the Problem 2 ###reference_blem2###. Under the adaptive bearing-only control law (17 ###reference_###), there exists a positive constant such that if the Lyapunov function in (24 ###reference_###) satisfies , then , , and there exists such that .\nFrom Eqn. (28 ###reference_###), we obtain\nThus, for sufficiently small, the inequality can always be satisfied. It follows from Lemma 5 ###reference_ma5### that no collision can happen, and As collision avoidance is ensured, , are uniformly continuous, and so is . The remaining proof is similar to the proof of Lemma 4 ###reference_ma4### and will be omitted.\n\u220e\nSimilar to Remark 2 ###reference_ark2###, we may relax the objective from perfectly achieving a target formation into achieving a good approximation of the target formation with the following modified bearing-only formation control law\nwhere and . Clearly,\n Denoting , similar to the proof of Lemma 4 ###reference_ma4### and the analysis in Remark 2 ###reference_ark2###, consider the Lyapunov function , where . 
We have,\nThe Cauchy-Schwartz inequality gives\nNext, using Lemma 2 ###reference_ma2###, the inequalities (27 ###reference_###), and Lemma 3 ###reference_ma3###, there holds\nwhere we have assumed that .\nUsing the inequalities (26 ###reference_###) and (29 ###reference_###), and let , we have\nWith such that , by similar arguments as in Lemma 4 ###reference_ma4###, we have . Thus, and . Notice that is strictly increasing in .\nNow, let with , we have , and thus if increases, there must be a time interval such that . During such a time in interval,\nfor . If\nfor\nwe still have , and thus the inequality holds. This implies that will decrease, and thus , . Finally, the inequality (33 ###reference_###) is always feasible given that is selected sufficiently small and , are selected large enough."
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Application in formation tracking",
69
+ "text": "Let the leaders move with the same velocity , which is assumed to be a bounded, uniformly continuous function. The desired formation in Definition 1 ###reference_inition1### is now time-varying, with . Thus, it is assumed that is infinitesimally rigid in . We will show that the adaptive formation control laws (3 ###reference_###) and (17 ###reference_###) are still capable of stabilizing the desired leader-follower formation.\nThe motion of the -agent system under the control law (3 ###reference_###) is now given in matrix form as follows:\nLet , then\nSuppose that the displacement-based control law (3 ###reference_###) is adopted for followers, we have\nwhich is of the same form as (4 ###reference_###), but having an additional disturbance term . Thus, the following theorem can be proved.\nConsider the -agent system (35 ###reference_###) under the displacement-based control law (3 ###reference_###), the following statements hold:\n, as ,\nThere exists a constant vector , such that , as ,\nAdditionally, if , and there exists a finite time such that then in finite time.\nThe proof is similar to the proof of Theorem 1 ###reference_orem1### and will be omitted.\n\u220e\nFinally, if the bearing-only control law (17 ###reference_###) is adopted for followers, the -agent formation can be expressed in matrix form as\nwhich is of the same form as (18a ###reference_.1###)\u2013(18b ###reference_.2###), but having an additional unknown disturbance term . We have the following theorem, whose proof is similar to the proof of Theorem 2 ###reference_orem2### and will be omitted.\nConsider the -agent system (36 ###reference_###) under the adaptive bearing-only based control law (17 ###reference_###). There exists a positive constant such that if the Lyapunov function in (24 ###reference_###) satisfies , there will be no collision between agents in formation, , and , for some constant vector .\nIn formation tracking, the leaders\u2019 trajectories can be embedded into each leader from the beginning, or can be remotely regulated by a control center. The leader agents are assumed to be equipped with a better positioning system, so that their positions are available for control and monitoring objectives. Suppose that the leaders are also subjected to bounded unknown disturbances, i.e.,\nwhere . To ensure that the leaders track their desired trajectories , and thus, eventually act as moving references for follower agents, the following position tracking law is respectively proposed\nwhere . By considering the Lyapunov function , we can prove that in finite time.\nIn this remark, we discuss the implementation of the bearing-only formation control laws. For indoor (laboratory) environments, the bearing vectors can be obtained from a single omnidirection camera mounted on the agent. Another setup is using an indoor localization system, which localizes agents\u2019 positions, calculates the bearing vectors, and sends this information to each agent to determine a corresponding control input [11 ###reference_b11###, 32 ###reference_b32###]. For outdoor implementation, the authors in [19 ###reference_b19###] proposed to use four cameras attached to four sides of a quadcopter to obtain bearing vector information from different directions. The limited field-of-view of a camera can be considered in the control law, as proposed in [7 ###reference_b7###]."
70
+ },
71
+ {
72
+ "section_id": "6",
73
+ "parent_section_id": null,
74
+ "section_name": "VI Simulation results",
75
+ "text": "In this section, we provide a few simulations to demonstrate the effectiveness of the formation control laws proposed in Sections III ###reference_###, IV ###reference_###, and VI ###reference_###. In all simulations, the target formation is described by a graph of 20 vertices and 39 edges and a desired configuration (a dodecahedron) as depicted in Figure 1 ###reference_###. It can be checked that is infinitesimally bearing rigid in 3D. In the simulations, there are leaders and followers.\n###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10###"
76
+ },
77
+ {
78
+ "section_id": "6.1",
79
+ "parent_section_id": "6",
80
+ "section_name": "VI-A Bearing-based formation control with disturbance rejection",
81
+ "text": "First, we simulate the formation with the control law (3 ###reference_###).\nLet each follower be modeled by a single integrator with disturbance given as\nwhere .\nThe control law (3 ###reference_###) is used with and are randomly generated on the interval . Simulation results are given as in Fig. 2 ###reference_###.\nAccording to Figs. 2(a) ###reference_sf1###, 2(c) ###reference_sf3###, and 2(d) ###reference_sf4###, for seconds, the desired formation is asymptotically achieved and the adaptive gains increase until the corresponding bearing constraint is stabilized. From seconds, the magnitude of the disturbance suddenly increases, which drives the agents out of the desired formation. The errors invoke the adaptive mechanism, increase again. It can be seen from Figs. 2(b) ###reference_sf2###, 2(c) ###reference_sf3###, and 2(d) ###reference_sf4### that followers are driven out from their desired positions from 40 to 55 seconds, as the magnitudes of their formation control laws are not big enough to counter the disturbance. From 55 to 80 seconds, when are sufficiently large, the agents are pulling back to the desired positions, and the desired formation is eventually achieved.\nSecond, we conduct a simulation of the formation under the adaptive control law with increasing/decreasing gains (11 ###reference_###). The disturbance acting on a follower in this simulation is given as\nWith , (proportional gain), (rate of adaptation), (leakage coefficient), and being chosen the same as the previous simulation, we obtain the simulation results as depicted in Fig. 3 ###reference_###.\nAs shown in Figs. 3(a) ###reference_sf1###, 3(c) ###reference_sf3###, and 3(d) ###reference_sf4###, for seconds, the adaptive gains increase and the control law drives the agents to a neighborhood of the desired formation. Due to the existence of a leakage term in (11 ###reference_###)(c), once a desired bearing constraint is sufficiently small, tends to reduce their values from 15 to 30 seconds. The decrements of make the formation errors raise again, however, remains on a small ball centered at , whose radius is jointly determined by the controller\u2019s parameters, the desired formation, and the magnitude of the unknown disturbance.\nFrom to seconds, as the magnitude of the disturbance is doubled, the agents are out from . As the errors increase, the term dominates the leakage term in the adaptive mechanism (11 ###reference_###)(c), and thus increase again. It can be seen from Figs. 3(b) ###reference_sf2###, 3(c) ###reference_sf3###, and 3(d) ###reference_sf4### that followers are driven further from their desired positions from 30 to about 38 seconds, and then being attracted to a ball centered at , with , from 38 to 65 seconds. For , the bearing constraints are sufficiently small, it can be seen that decrease again due to the leakage term. For , as the disturbance magnitude decreased to 0.1, as satisfy the requirement of Lemma 1 ###reference_ma1###, converges to after a short time ( at s). However, from s, because the leakage term is the only active term in (11 ###reference_###)(c), decreases. Gradually, once the control law cannot fully reject the disturbance, the disturbances make out of . The control law will still keep inside a ball centered at , with ."
82
+ },
83
+ {
84
+ "section_id": "6.2",
85
+ "parent_section_id": "6",
86
+ "section_name": "VI-B Bearing-only formation control with disturbance rejection",
87
+ "text": "In this subsection, we simulate the adaptive bearing-only control law (17 ###reference_###) for the 20-agent system. The simulation\u2019s parameters are , and .\n###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### The disturbance of each follower in this simulation is given as\nThe simulation results are depicted in Fig. (4 ###reference_###). For (second), there is no disturbance acting on the formation, the control law stabilizes to after about 2 seconds. The adaptive gains increase correspondingly in and remain unchanging until , when there are disturbances acting on the agents. Due to the presence of the disturbances, leaves the target configuration , the bearing errors make increase. In turn, the control law\u2019s magnitude increases and is eventually capable of suppressing the disturbance from s. For s, approaches to . Approximately, reached to after seconds, and cease to increase as the bearing constraints were almost satisfied. For s, as the disturbances increase their magnitudes, leaves again. The adaptive gains increase correspondingly, and eventually pull back to . It can be seen that the increment of is relatively slower than other displayed adaptive gains for s. Chattering phenomenon can also be seen due to the disturbances (for and s), which causes significant fluctuations of around ."
88
+ },
89
+ {
90
+ "section_id": "6.3",
91
+ "parent_section_id": "6",
92
+ "section_name": "VI-C Bearing-based formation tracking",
93
+ "text": "In this subsection, we simulate the formation (35 ###reference_###) with moving leaders. The leaders\u2019 velocities are chosen as\nThe simulation\u2019s parameters are , . The initial positions of the agents are the same as in the previous simulation. Disturbances are not included in the simulation.\n###figure_17### ###figure_18### ###figure_19### ###figure_20### Simulation results are shown in Fig. 5 ###reference_###. It can be seen from Fig. 5(b) ###reference_sf2### that for seconds, the formation\u2019s error increases because the adaptive control gains , which specify magnitude of the control input, is still quite small. For second, the formation\u2019s error decreases to 0. Fig. 5(c) ###reference_sf3### shows that the adaptive gains tend to increase for second, and after the desired formation has been achieved (approximately at second), remain unchanged. The magnitude of the control input versus time is correspondingly displayed in Fig. 5(d) ###reference_sf4###, which varies accordingly to the adaptive gains and the leaders\u2019 velocity."
94
+ },
95
+ {
96
+ "section_id": "6.4",
97
+ "parent_section_id": "6",
98
+ "section_name": "VI-D Bearing-only formation tracking",
99
+ "text": "In this subsection, we simulate the formation with moving leaders (36 ###reference_###). The initial positions of the agents and the leaders\u2019 velocities are chosen the same as the previous simulation in subsection VI-C ###reference_###. The simulation\u2019s parameters are , . The disturbances acting on agents are chosen as\nSimulation results are depicted in Fig. 6 ###reference_###. For s, no disturbances acting on agents, and the desired moving formation is tracked after about 11 seconds. are increasing during this time period. The behavior of the system is quite similar to the previous simulation. However, it is observed that the bearing-only control law (17 ###reference_###) gives a relatively faster convergence rate then the displacement-based control law (3 ###reference_###). This can be explained by the fact that in (3 ###reference_###), the displacement are projected into im. This makes becoming relatively small, especially when the angles between and are small. In contrast, the control law (17 ###reference_###) uses only the sign of the bearing error, which is dimensionless.\nFor s, due to the presence of the disturbances, temporally cannot track (Figs. 6(a) ###reference_sf1###\u20136(b) ###reference_sf2###). Correspondingly, as depicted in Figs. 6(c) ###reference_sf3###\u20136(d) ###reference_sf4###, the adaptive gains and the control magnitude increase again. As is large enough, the control law simultaneously rejects the disturbance and renders the agents to their desired moving target point (approximately after 27 seconds).\n###figure_21### ###figure_22### ###figure_23### ###figure_24###"
100
+ },
101
+ {
102
+ "section_id": "7",
103
+ "parent_section_id": null,
104
+ "section_name": "VII Conclusions",
105
+ "text": "The bearing-constrained formation control with unknown bounded disturbances has been studied for two types of measurements: displacements and bearing vectors. The proposed control laws can adapt the control magnitudes separately for each bearing constraint whenever the desired constraint has not been satisfied. Once the control magnitudes have exceeded the magnitude of the disturbances, it is possible to stabilize the desired configuration in finite time. Since the disturbance\u2019s magnitude may increase after the desired formation has been achieved, it may temporarily make the agents leave the desired configuration. The magnitude of the control laws will then increase accordingly to cope with the disturbances and eventually stabilize the target formation again. This process can be repeated as long as there is disturbance and control gains which always depend on the constraints\u2019 errors. Several modifications of the proposed control laws with regard to the upper bounds of the matched disturbance and the error\u2019s bound have been also discussed. Notably, the formation tracking problem with unknown bounded leaders\u2019 velocity can be also solved with the proposed control framework.\nThe theoretical results on bearing-based formation control has been rapidly filled up in recent years. Further research interests will gradually be shifted toward the implementation of bearing-only algorithms on formations of unmanned vehicles, possibly by combining the state-of-the-art theoretical findings with vision-based and machine-learning techniques."
106
+ }
107
+ ],
108
+ "appendix": [],
109
+ "tables": {},
110
+ "image_paths": {
111
+ "1(a)": {
112
+ "figure_path": "2305.19353v5_figure_1(a).png",
113
+ "caption": "(a)\nFigure 1: An infinitesimally bearing rigid framework (\ud835\udca2,\ud835\udc29\u2217)\ud835\udca2superscript\ud835\udc29(\\mathcal{G},\\mathbf{p}^{*})( caligraphic_G , bold_p start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT ) in \u211d3superscript\u211d3\\mathbb{R}^{3}blackboard_R start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT. (a) the graph \ud835\udca2\ud835\udca2\\mathcal{G}caligraphic_G; (b) a desired configuration \ud835\udc29\u2217superscript\ud835\udc29\\mathbf{p}^{*}bold_p start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT where \ud835\udc29i\u2217,i=1,\u2026,20,formulae-sequencesuperscriptsubscript\ud835\udc29\ud835\udc56\ud835\udc561\u202620\\mathbf{p}_{i}^{*},i=1,\\ldots,20,bold_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT , italic_i = 1 , \u2026 , 20 , are located at the vertices of a dodecahedron.",
114
+ "url": "http://arxiv.org/html/2305.19353v5/extracted/6029143/GraphG.png"
115
+ },
116
+ "1(b)": {
117
+ "figure_path": "2305.19353v5_figure_1(b).png",
118
+ "caption": "(b)\nFigure 1: An infinitesimally bearing rigid framework (\ud835\udca2,\ud835\udc29\u2217)\ud835\udca2superscript\ud835\udc29(\\mathcal{G},\\mathbf{p}^{*})( caligraphic_G , bold_p start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT ) in \u211d3superscript\u211d3\\mathbb{R}^{3}blackboard_R start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT. (a) the graph \ud835\udca2\ud835\udca2\\mathcal{G}caligraphic_G; (b) a desired configuration \ud835\udc29\u2217superscript\ud835\udc29\\mathbf{p}^{*}bold_p start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT where \ud835\udc29i\u2217,i=1,\u2026,20,formulae-sequencesuperscriptsubscript\ud835\udc29\ud835\udc56\ud835\udc561\u202620\\mathbf{p}_{i}^{*},i=1,\\ldots,20,bold_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u2217 end_POSTSUPERSCRIPT , italic_i = 1 , \u2026 , 20 , are located at the vertices of a dodecahedron.",
119
+ "url": "http://arxiv.org/html/2305.19353v5/extracted/6029143/FTdisp_config.png"
120
+ },
121
+ "2(a)": {
122
+ "figure_path": "2305.19353v5_figure_2(a).png",
123
+ "caption": "(a)\nFigure 2: Simulation 1a: the 20-agent system under the control law (3). (a) Trajectories of agents from 0 to 40 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions (t=40\ud835\udc6140t=40italic_t = 40 sec) are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 40 to 80 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.",
124
+ "url": "http://arxiv.org/html/2305.19353v5/x1.png"
125
+ },
126
+ "2(b)": {
127
+ "figure_path": "2305.19353v5_figure_2(b).png",
128
+ "caption": "(b)\nFigure 2: Simulation 1a: the 20-agent system under the control law (3). (a) Trajectories of agents from 0 to 40 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions (t=40\ud835\udc6140t=40italic_t = 40 sec) are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 40 to 80 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.",
129
+ "url": "http://arxiv.org/html/2305.19353v5/x2.png"
130
+ },
131
+ "2(c)": {
132
+ "figure_path": "2305.19353v5_figure_2(c).png",
133
+ "caption": "(c)\nFigure 2: Simulation 1a: the 20-agent system under the control law (3). (a) Trajectories of agents from 0 to 40 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions (t=40\ud835\udc6140t=40italic_t = 40 sec) are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 40 to 80 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.",
134
+ "url": "http://arxiv.org/html/2305.19353v5/x3.png"
135
+ },
136
+ "2(d)": {
137
+ "figure_path": "2305.19353v5_figure_2(d).png",
138
+ "caption": "(d)\nFigure 2: Simulation 1a: the 20-agent system under the control law (3). (a) Trajectories of agents from 0 to 40 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions (t=40\ud835\udc6140t=40italic_t = 40 sec) are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 40 to 80 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.",
139
+ "url": "http://arxiv.org/html/2305.19353v5/x4.png"
140
+ },
141
+ "3(a)": {
142
+ "figure_path": "2305.19353v5_figure_3(a).png",
143
+ "caption": "(a)\nFigure 3: Simulation 1b: the 20-agent system under the control law (11). (a) Trajectories of agents from 0 to 30 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 30 to 90 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.",
144
+ "url": "http://arxiv.org/html/2305.19353v5/x5.png"
145
+ },
146
+ "3(b)": {
147
+ "figure_path": "2305.19353v5_figure_3(b).png",
148
+ "caption": "(b)\nFigure 3: Simulation 1b: the 20-agent system under the control law (11). (a) Trajectories of agents from 0 to 30 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 30 to 90 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.",
149
+ "url": "http://arxiv.org/html/2305.19353v5/x6.png"
150
+ },
151
+ "3(c)": {
152
+ "figure_path": "2305.19353v5_figure_3(c).png",
153
+ "caption": "(c)\nFigure 3: Simulation 1b: the 20-agent system under the control law (11). (a) Trajectories of agents from 0 to 30 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 30 to 90 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.",
154
+ "url": "http://arxiv.org/html/2305.19353v5/x7.png"
155
+ },
156
+ "3(d)": {
157
+ "figure_path": "2305.19353v5_figure_3(d).png",
158
+ "caption": "(d)\nFigure 3: Simulation 1b: the 20-agent system under the control law (11). (a) Trajectories of agents from 0 to 30 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 30 to 90 seconds; (c) Formation\u2019s error versus time; (d) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time.",
159
+ "url": "http://arxiv.org/html/2305.19353v5/x8.png"
160
+ },
161
+ "4(a)": {
162
+ "figure_path": "2305.19353v5_figure_4(a).png",
163
+ "caption": "(a)\nFigure 4: Simulation 2: the 20-agent system under the bearing-only control law (17). (a) Trajectories of agents from 0 to 5 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 5 to 15 seconds; (c) Trajectories of agents from 15 to 25 seconds; (d) Formation\u2019s error versus time; (e) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time. (f) Magnitude of control input versus time.",
164
+ "url": "http://arxiv.org/html/2305.19353v5/x9.png"
165
+ },
166
+ "4(b)": {
167
+ "figure_path": "2305.19353v5_figure_4(b).png",
168
+ "caption": "(b)\nFigure 4: Simulation 2: the 20-agent system under the bearing-only control law (17). (a) Trajectories of agents from 0 to 5 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 5 to 15 seconds; (c) Trajectories of agents from 15 to 25 seconds; (d) Formation\u2019s error versus time; (e) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time. (f) Magnitude of control input versus time.",
169
+ "url": "http://arxiv.org/html/2305.19353v5/x10.png"
170
+ },
171
+ "4(c)": {
172
+ "figure_path": "2305.19353v5_figure_4(c).png",
173
+ "caption": "(c)\nFigure 4: Simulation 2: the 20-agent system under the bearing-only control law (17). (a) Trajectories of agents from 0 to 5 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 5 to 15 seconds; (c) Trajectories of agents from 15 to 25 seconds; (d) Formation\u2019s error versus time; (e) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time. (f) Magnitude of control input versus time.",
174
+ "url": "http://arxiv.org/html/2305.19353v5/x11.png"
175
+ },
176
+ "4(d)": {
177
+ "figure_path": "2305.19353v5_figure_4(d).png",
178
+ "caption": "(d)\nFigure 4: Simulation 2: the 20-agent system under the bearing-only control law (17). (a) Trajectories of agents from 0 to 5 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 5 to 15 seconds; (c) Trajectories of agents from 15 to 25 seconds; (d) Formation\u2019s error versus time; (e) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time. (f) Magnitude of control input versus time.",
179
+ "url": "http://arxiv.org/html/2305.19353v5/x12.png"
180
+ },
181
+ "4(e)": {
182
+ "figure_path": "2305.19353v5_figure_4(e).png",
183
+ "caption": "(e)\nFigure 4: Simulation 2: the 20-agent system under the bearing-only control law (17). (a) Trajectories of agents from 0 to 5 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 5 to 15 seconds; (c) Trajectories of agents from 15 to 25 seconds; (d) Formation\u2019s error versus time; (e) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time. (f) Magnitude of control input versus time.",
184
+ "url": "http://arxiv.org/html/2305.19353v5/x13.png"
185
+ },
186
+ "4(f)": {
187
+ "figure_path": "2305.19353v5_figure_4(f).png",
188
+ "caption": "(f)\nFigure 4: Simulation 2: the 20-agent system under the bearing-only control law (17). (a) Trajectories of agents from 0 to 5 seconds (leaders are marked with \u0394\u0394\\Deltaroman_\u0394, followers\u2019 initial and final positions are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Trajectories of agents from 5 to 15 seconds; (c) Trajectories of agents from 15 to 25 seconds; (d) Formation\u2019s error versus time; (e) A subset of the adaptive gains \u03b3isubscript\ud835\udefe\ud835\udc56\\gamma_{i}italic_\u03b3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT versus time. (f) Magnitude of control input versus time.",
189
+ "url": "http://arxiv.org/html/2305.19353v5/extracted/6029143/simBO3Input.png"
190
+ },
191
+ "5(a)": {
192
+ "figure_path": "2305.19353v5_figure_5(a).png",
193
+ "caption": "(a)\nFigure 5: Simulation 3: the 20-agent system with moving leaders under the control law (3). (a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0\ud835\udc610t=0italic_t = 0 and t=20\ud835\udc6120t=20italic_t = 20 sec. are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time; (d) The magnitude of the control input versus time.",
194
+ "url": "http://arxiv.org/html/2305.19353v5/x14.png"
195
+ },
196
+ "5(b)": {
197
+ "figure_path": "2305.19353v5_figure_5(b).png",
198
+ "caption": "(b)\nFigure 5: Simulation 3: the 20-agent system with moving leaders under the control law (3). (a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0\ud835\udc610t=0italic_t = 0 and t=20\ud835\udc6120t=20italic_t = 20 sec. are marked with \u2018xx{\\rm x}roman_x\u2019 and \u2018oo{\\rm o}roman_o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3i\u2062jsubscript\ud835\udefe\ud835\udc56\ud835\udc57\\gamma_{ij}italic_\u03b3 start_POSTSUBSCRIPT italic_i italic_j end_POSTSUBSCRIPT versus time; (d) The magnitude of the control input versus time.",
199
+ "url": "http://arxiv.org/html/2305.19353v5/x15.png"
200
+ },
201
+ "5(c)": {
202
+ "figure_path": "2305.19353v5_figure_5(c).png",
203
+ "caption": "(c)\nFigure 5: Simulation 3: the 20-agent system with moving leaders under the control law (3). (a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0 and t=20 sec. are marked with \u2018x\u2019 and \u2018o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3_ij versus time; (d) The magnitude of the control input versus time.",
204
+ "url": "http://arxiv.org/html/2305.19353v5/x16.png"
205
+ },
206
+ "5(d)": {
207
+ "figure_path": "2305.19353v5_figure_5(d).png",
208
+ "caption": "(d)\nFigure 5: Simulation 3: the 20-agent system with moving leaders under the control law (3). (a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0 and t=20 sec. are marked with \u2018x\u2019 and \u2018o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3_ij versus time; (d) The magnitude of the control input versus time.",
209
+ "url": "http://arxiv.org/html/2305.19353v5/x17.png"
210
+ },
211
+ "6(a)": {
212
+ "figure_path": "2305.19353v5_figure_6(a).png",
213
+ "caption": "(a)\nFigure 6: Simulation 4: the 20-agent system under the control law (17) with moving leaders. (a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0 are marked with \u2018x\u2019 and at t=5, 15, 23, 30 s are marked with \u2018o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3_i versus time; (d) The magnitude of the control input versus time.",
214
+ "url": "http://arxiv.org/html/2305.19353v5/x18.png"
215
+ },
216
+ "6(b)": {
217
+ "figure_path": "2305.19353v5_figure_6(b).png",
218
+ "caption": "(b)\nFigure 6: Simulation 4: the 20-agent system under the control law (17) with moving leaders. (a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0 are marked with \u2018x\u2019 and at t=5, 15, 23, 30 s are marked with \u2018o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3_i versus time; (d) The magnitude of the control input versus time.",
219
+ "url": "http://arxiv.org/html/2305.19353v5/x19.png"
220
+ },
221
+ "6(c)": {
222
+ "figure_path": "2305.19353v5_figure_6(c).png",
223
+ "caption": "(c)\nFigure 6: Simulation 4: the 20-agent system under the control law (17) with moving leaders. (a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0 are marked with \u2018x\u2019 and at t=5, 15, 23, 30 s are marked with \u2018o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3_i versus time; (d) The magnitude of the control input versus time.",
224
+ "url": "http://arxiv.org/html/2305.19353v5/x20.png"
225
+ },
226
+ "6(d)": {
227
+ "figure_path": "2305.19353v5_figure_6(d).png",
228
+ "caption": "(d)\nFigure 6: Simulation 4: the 20-agent system under the control law (17) with moving leaders. (a) Trajectories of agents (leaders\u2019 trajectories are colored blue, followers\u2019 positions at t=0 are marked with \u2018x\u2019 and at t=5, 15, 23, 30 s are marked with \u2018o\u2019, respectively); (b) Formation\u2019s error versus time; (c) A subset of the adaptive gains \u03b3_i versus time; (d) The magnitude of the control input versus time.",
229
+ "url": "http://arxiv.org/html/2305.19353v5/x21.png"
230
+ }
231
+ },
232
+ "validation": true,
233
+ "references": [
234
+ {
235
+ "1": {
236
+ "title": "Distributed bearing-based formation control and network localization\nwith exogenous disturbances.",
237
+ "author": "Y.-B. Bae, S.-H. Kwon, Y.-H. Lim, and H.-S. Ahn.",
238
+ "venue": "International Journal of Robust and Nonlinear Control,\n32(11):6556\u20136573, 2022.",
239
+ "url": null
240
+ }
241
+ },
242
+ {
243
+ "2": {
244
+ "title": "Distributed formation control with relaxed motion requirements.",
245
+ "author": "A. N. Bishop, M. Deghat, B. D. O. Anderson, and Y. Hong.",
246
+ "venue": "International Journal of Robust and Nonlinear Control,\n25(17):3210\u20133230, 2015.",
247
+ "url": null
248
+ }
249
+ },
250
+ {
251
+ "3": {
252
+ "title": "Distributed coordinated tracking with reduced interaction via a\nvariable structure approach.",
253
+ "author": "Y. Cao and W. Ren.",
254
+ "venue": "IEEE Transactions on Automatic Control, 57(1):33\u201348, 2011.",
255
+ "url": null
256
+ }
257
+ },
258
+ {
259
+ "4": {
260
+ "title": "Maneuvering angle rigid formations with global convergence\nguarantees.",
261
+ "author": "L. Chen, Z. Lin, H. G. De Marina, Z. Sun, and M. Feroskhan.",
262
+ "venue": "IEEE/CAA Journal of Automatica Sinica, 9(8):1464\u20131475, 2022.",
263
+ "url": null
264
+ }
265
+ },
266
+ {
267
+ "5": {
268
+ "title": "Distributed rotational and translational maneuvering of rigid\nformations and their applications.",
269
+ "author": "H. G. De Marina, B. Jayawardhana, and M. Cao.",
270
+ "venue": "IEEE Transactions on Robotics, 32(3):684\u2013697, 2016.",
271
+ "url": null
272
+ }
273
+ },
274
+ {
275
+ "6": {
276
+ "title": "Using angle of arrival (bearing) information in network localization.",
277
+ "author": "T. Eren, W. Whiteley, and P. N. Belhumeur.",
278
+ "venue": "In Proc. of the 45th IEEE Conference on Decision and Control\n(CDC), pages 4676\u20134681. IEEE, 2006.",
279
+ "url": null
280
+ }
281
+ },
282
+ {
283
+ "7": {
284
+ "title": "Bearing-based autonomous communication relay positioning under\nfield-of-view constraints.",
285
+ "author": "M. Fabris and D. Zelazo.",
286
+ "venue": "Advanced Control for Applications: Engineering and Industrial\nSystems, 4(2):e103, 2022.",
287
+ "url": null
288
+ }
289
+ },
290
+ {
291
+ "8": {
292
+ "title": "Bearing-only formation control with bounded disturbances in agents\u2019\nlocal coordinate frames.",
293
+ "author": "C. Garanayak and D. Mukherjee.",
294
+ "venue": "IEEE Control Systems Letters, pages 2940\u20132945, 2023.",
295
+ "url": null
296
+ }
297
+ },
298
+ {
299
+ "9": {
300
+ "title": "Distributed adaptive time-varying group formation tracking for\nmultiagent systems with multiple leaders on directed graphs.",
301
+ "author": "J. Hu, P. Bhowmick, and A. Lanzon.",
302
+ "venue": "IEEE Transactions on Control of Network Systems, 7(1):140\u2013150,\n2019.",
303
+ "url": null
304
+ }
305
+ },
306
+ {
307
+ "10": {
308
+ "title": "Bearing-based distributed formation control of multiple vertical\ntake-off and landing UAVs.",
309
+ "author": "Y. Huang and Z. Meng.",
310
+ "venue": "IEEE Transactions on Control of Network Systems,\n8(3):1281\u20131292, 2021.",
311
+ "url": null
312
+ }
313
+ },
314
+ {
315
+ "11": {
316
+ "title": "Bearing-only control of directed cycle formations: Almost global\nconvergence and hardware implementation.",
317
+ "author": "G. Ko, M. H. Trinh, and H.-S. Ahn.",
318
+ "venue": "International Journal of Robust and Nonlinear Control,\n30(12):4789\u20134804, 2020.",
319
+ "url": null
320
+ }
321
+ },
322
+ {
323
+ "12": {
324
+ "title": "Bearing-only adaptive formation control using back-stepping method.",
325
+ "author": "S. Li, Q. Wang, E. Wang, and Y. Chen.",
326
+ "venue": "Frontiers in Control Engineering, 2:700053, 2021.",
327
+ "url": null
328
+ }
329
+ },
330
+ {
331
+ "13": {
332
+ "title": "Bearing-based formation control of networked robotic systems with\nparametric uncertainties.",
333
+ "author": "X. Li, X. Luo, J. Wang, Y. Zhu, and X. Guan.",
334
+ "venue": "Neurocomputing, 306:234\u2013245, 2018.",
335
+ "url": null
336
+ }
337
+ },
338
+ {
339
+ "14": {
340
+ "title": "Adaptive formation control of networked robotic systems with\nbearing-only measurements.",
341
+ "author": "X. Li, C. Wen, and C. Chen.",
342
+ "venue": "IEEE Transactions on Cybernetics, 51(1):199\u2013209, 2020.",
343
+ "url": null
344
+ }
345
+ },
346
+ {
347
+ "15": {
348
+ "title": "Biologically inspired bearing-only navigation and tracking.",
349
+ "author": "S. G. Loizou and V. Kumar.",
350
+ "venue": "In Proc. of the 46th IEEE Conference on Decision and Control\n(CDC), pages 1386\u20131391. IEEE, 2007.",
351
+ "url": null
352
+ }
353
+ },
354
+ {
355
+ "16": {
356
+ "title": "A survey of multi-agent formation control.",
357
+ "author": "K.-K. Oh, M.-C. Park, and H.-S. Ahn.",
358
+ "venue": "Automatica, 53:424\u2013440, 2015.",
359
+ "url": null
360
+ }
361
+ },
362
+ {
363
+ "17": {
364
+ "title": "Adaptive sliding mode control for disturbances with unknown bounds.",
365
+ "author": "T. R. Oliveira, J. P. V. Cunha, and L. Hsu.",
366
+ "venue": "In Proc. of the 14th International Workshop on Variable\nStructure Systems (VSS), pages 59\u201364. IEEE, 2016.",
367
+ "url": null
368
+ }
369
+ },
370
+ {
371
+ "18": {
372
+ "title": "On adaptive sliding mode control without a priori bounded\nuncertainty.",
373
+ "author": "S. Roy, S. Baldi, and L. M. Fridman.",
374
+ "venue": "Automatica, 111:108650, 2020.",
375
+ "url": null
376
+ }
377
+ },
378
+ {
379
+ "19": {
380
+ "title": "Vision-based drone flocking in outdoor environments.",
381
+ "author": "F. Schilling, F. Schiano, and D. Floreano.",
382
+ "venue": "IEEE Robotics and Automation Letters, 6(2):2954\u20132961, 2021.",
383
+ "url": null
384
+ }
385
+ },
386
+ {
387
+ "20": {
388
+ "title": "Bearing-compass formation control: A human-swarm interaction\nperspective.",
389
+ "author": "E. Schoof, A. Chapman, and M. Mesbahi.",
390
+ "venue": "In Proc. of the 2014 American Control Conference (ACC),\nPortland, OR, USA, pages 3881\u20133886. IEEE, 2014.",
391
+ "url": null
392
+ }
393
+ },
394
+ {
395
+ "21": {
396
+ "title": "Lyapunov stability theory of nonsmooth systems.",
397
+ "author": "D. Shevitz and B. Paden.",
398
+ "venue": "IEEE Transactions on Automatic Control, 39(9):1910\u20131914, 1994.",
399
+ "url": null
400
+ }
401
+ },
402
+ {
403
+ "22": {
404
+ "title": "Bearing-based formation tracking control with time-varying velocity\nestimation.",
405
+ "author": "H. Su, C. Chen, Z. Yang, S. Zhu, and X. Guan.",
406
+ "venue": "IEEE Transactions on Cybernetics, 53(6):3961 \u2013 3973, 2023.",
407
+ "url": null
408
+ }
409
+ },
410
+ {
411
+ "23": {
412
+ "title": "Localization and tracking control of autonomous vehicles in\ntime-varying bearing formation.",
413
+ "author": "Z. Tang and A. Lor\u00eda.",
414
+ "venue": "IEEE Control Systems Letters, 7:1231\u20131236, 2022.",
415
+ "url": null
416
+ }
417
+ },
418
+ {
419
+ "24": {
420
+ "title": "Finite-time bearing-only formation control via distributed global\norientation estimation.",
421
+ "author": "Q. V. Tran, M. H. Trinh, D. Zelazo, D. Mukherjee, and H.-S. Ahn.",
422
+ "venue": "IEEE Transactions on Control of Network Systems, 6(2):702\u2013712,\n2019.",
423
+ "url": null
424
+ }
425
+ },
426
+ {
427
+ "25": {
428
+ "title": "Finite-time bearing-based maneuver of acyclic leader-follower\nformations.",
429
+ "author": "M. H. Trinh and H.-S. Ahn.",
430
+ "venue": "IEEE Control Systems Letters, 6:1004\u20131009, 2021.",
431
+ "url": null
432
+ }
433
+ },
434
+ {
435
+ "26": {
436
+ "title": "Bearing-based formation control of a group of agents with\nleader-first follower structure.",
437
+ "author": "M. H. Trinh, S. Zhao, Z. Sun, D. Zelazo, B. D. O. Anderson, and H.-S. Ahn.",
438
+ "venue": "IEEE Transactions on Automatic Control, 64(2):598\u2013613, 2019.",
439
+ "url": null
440
+ }
441
+ },
442
+ {
443
+ "27": {
444
+ "title": "Rigid components identification and rigidity control in bearing-only\nlocalization using the graph cycle basis.",
445
+ "author": "R. Tron, L. Carlone, F. Dellaert, and K. Daniilidis.",
446
+ "venue": "In Proc. of the American Control Conference (ACC), Chicago, IL,\nUSA, pages 3911\u20133918. IEEE, 2015.",
447
+ "url": null
448
+ }
449
+ },
450
+ {
451
+ "28": {
452
+ "title": "Decentralized sliding-mode control laws for the bearing-based\nformation tracking problem.",
453
+ "author": "D. V. Vu and M. H. Trinh.",
454
+ "venue": "In Proc. of the International Conference on Control, Automation\nand Information Sciences (ICCAIS), pages 67\u201372. IEEE, 2021.",
455
+ "url": null
456
+ }
457
+ },
458
+ {
459
+ "29": {
460
+ "title": "Distributed collision-free bearing coordination of multi-uav systems\nwith actuator faults and time delays.",
461
+ "author": "K. Wu, J. Hu, Z. Li, Z. Ding, and F. Arvin.",
462
+ "venue": "IEEE Transactions on Intelligent Transportation Systems, 2024.",
463
+ "url": null
464
+ }
465
+ },
466
+ {
467
+ "30": {
468
+ "title": "Bearing-only measurement self-localization, velocity consensus and\nformation control.",
469
+ "author": "M. Ye, B. D. O. Anderson, and C. Yu.",
470
+ "venue": "IEEE Transactions on Aerospace and Electronic Systems,\n53(2):575\u2013586, 2017.",
471
+ "url": null
472
+ }
473
+ },
474
+ {
475
+ "31": {
476
+ "title": "Bearing-only formation tracking control of multi-agent systems with\nlocal reference frames and constant-velocity leaders.",
477
+ "author": "J. Zhao, X. Yu, X. Li, and H. Wang.",
478
+ "venue": "IEEE Control Systems Letters, 5(1):1\u20136, 2020.",
479
+ "url": null
480
+ }
481
+ },
482
+ {
483
+ "32": {
484
+ "title": "Bearing-only formation tracking control of multiagent systems.",
485
+ "author": "S. Zhao, Z. Li, and Z. Ding.",
486
+ "venue": "IEEE Transactions on Automatic Control, 64(11):4541\u20134554,\n2019.",
487
+ "url": null
488
+ }
489
+ },
490
+ {
491
+ "33": {
492
+ "title": "Laman graphs are generically bearing rigid in arbitrary dimensions.",
493
+ "author": "S. Zhao, Z. Sun, D. Zelazo, M. H. Trinh, and H.-S. Ahn.",
494
+ "venue": "In Proc. of the 56th IEEE Conference on Decision and Control\n(CDC), pages 3356\u20133361. IEEE, 2017.",
495
+ "url": null
496
+ }
497
+ },
498
+ {
499
+ "34": {
500
+ "title": "Bearing rigidity and almost global bearing-only formation\nstabilization.",
501
+ "author": "S. Zhao and D. Zelazo.",
502
+ "venue": "IEEE Transactions on Automatic Control, 61(5):1255\u20131268, 2016.",
503
+ "url": null
504
+ }
505
+ },
506
+ {
507
+ "35": {
508
+ "title": "Localizability and distributed protocols for bearing-based network\nlocalization in arbitrary dimensions.",
509
+ "author": "S. Zhao and D. Zelazo.",
510
+ "venue": "Automatica, 69:334\u2013341, 2016.",
511
+ "url": null
512
+ }
513
+ }
514
+ ],
515
+ "url": "http://arxiv.org/html/2305.19353v5"
516
+ }
20241127/2307.00319v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2307.14132v4.json ADDED
@@ -0,0 +1,126 @@
1
+ {
2
+ "title": "CIF-T: A Novel CIF-based Transducer Architecture for Automatic Speech Recognition",
3
+ "abstract": "RNN-T models are widely used in ASR, which rely on the RNN-T loss to achieve length alignment between input audio and target sequence. However, the implementation complexity and the alignment-based optimization target of RNN-T loss lead to computational redundancy and a reduced role for predictor network, respectively. In this paper, we propose a novel model named CIF-Transducer (CIF-T) which incorporates the Continuous Integrate-and-Fire (CIF) mechanism with the RNN-T model to achieve efficient alignment. In this way, the RNN-T loss is abandoned, thus bringing a computational reduction and allowing the predictor network a more significant role. We also introduce Funnel-CIF, Context Blocks, Unified Gating and Bilinear Pooling joint network, and auxiliary training strategy to further improve performance. Experiments on the 178-hour AISHELL-1 and 10000-hour WenetSpeech datasets show that CIF-T achieves state-of-the-art results with lower computational overhead compared to RNN-T models.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Recurrent neural network transducer (RNN-T) models [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###] have gained significant attention because of their natural streaming capability and superior performance in ASR tasks. RNN-T was initially proposed to address the conditional independence assumption of CTC models by introducing a predictor network that serves as a language model (LM). During RNN-T training, blank symbols are added with the RNN-T loss to facilitate the learning of alignments between acoustic and semantic features, making RNN-T models particularly suitable for frame-synchronous decoding. However, RNN-T needs to consider all feasible decoding paths, as illustrated in Fig. 1, which requires the probability distribution of all symbols in the utterance at each time step (usually a 4-D tensor) [5 ###reference_b5###]. This results in a high demand for training resources, which leads to much longer training times. Similarly, excessive computational redundancy causes high prediction delay in the decoding process.\n###figure_1### Numerous efforts have been made to decrease the computational redundancy of RNN-T. Li et al. [6 ###reference_b6###] remove the padding portion of the encoder and predictor network outputs, and use a sentence-by-sentence combination instead of broadcasting. Ref. [7 ###reference_b7###] first predicts the posterior probability of the encoder outputs with CTC [8 ###reference_b8###] before feeding them to the joint network, and then removes the frames predicted as the blank symbol according to a specific threshold. Considering the extensive vocabulary size, Kuang et al. [5 ###reference_b5###] propose Pruned RNN-T, which restricts the range of the predictor network output at each time step to minimize the computation of the RNN-T loss. Other works focus on optimizing the decoding path of RNN-T to decrease the delay in the decoding process [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###].\n###figure_2### Although these methods have been successful in decreasing the redundant computation of RNN-T models, they are still improvements of the RNN-T loss. However, the use of the RNN-T loss to constrain the model gives rise to another significant challenge. The primary optimization target of the RNN-T loss is to achieve length alignment between the input audio and the target sequence, which results in an over-reliance on the predictor network to facilitate alignment. This over-reliance comes at the expense of the essential contextual semantic knowledge required for accurate transcription. As discussed in [13 ###reference_b13###], when substituting the weights of the predictor network with randomized values, the RNN-T model quality remains almost the same as the fully trainable baselines. This constrains the capacity of RNN-T for internal language modeling.\nThe primary issue causing these problems is the difficulty in handling the length mismatch between the input audio and target sequence, which poses a challenge for designing the RNN-T loss and fusing the two modalities effectively in the joint network. To address this challenge, we observe that the blank symbols emitted in the RNN-T decoding process are essentially an aggregation of acoustic features, and the successful emission of non-blank characters indicates the availability of sufficient acoustic information for prediction. As illustrated in Fig. 1 ###reference_###(a), the blue acoustic features are aggregated as the blank symbols are emitted, and when and complete the aggregation, the character is emitted. The green acoustic feature is used to emit , and the yellow acoustic features and are used to emit as the aggregation continues. This mechanism is consistent with the recently proposed soft and monotonic alignment method, known as Continuous Integrate-and-Fire (CIF) [14 ###reference_b14###, 15 ###reference_b15###]. As depicted in Fig. 1 ###reference_###(b), CIF accurately identifies acoustic boundaries and extracts the acoustic features corresponding to each target symbol for prediction by accumulating the weights at each time step. The blue, green, and yellow feature blocks indicate the corresponding acoustic features used to predict , , and , respectively. Hence, we replace the RNN-T loss with the CIF module, significantly reducing the computational burden in the joint network and directly supervising the posterior probability of each predicted symbol with the cross-entropy (CE) loss.\nIn this paper, we propose a novel architecture for ASR named CIF-Transducer (CIF-T), which incorporates the CIF mechanism with the RNN-T model to achieve efficient alignment. Our approach obviates the requirement for the RNN-T loss, resulting in a substantial reduction in computational redundancy and allowing the predictor network to assume a more prominent role in enhancing prediction accuracy. Due to the monotonic alignment property of CIF, it is seamless to integrate the CIF mechanism with the RNN-T model while maintaining the ability to decode in a streaming manner. During the alignment process using the CIF module, a certain amount of original acoustic information may be lost. In order to mitigate this loss, we propose an extension to the CIF module called Funnel-CIF, which supplements the CIF outputs with the original information. Moreover, we introduce the Context Blocks, Unified Gating and Bilinear Pooling (UGBP) joint network, and auxiliary training strategy to further enhance the performance. We conduct experiments on the publicly available 170-hour AISHELL-1 and 10000-hour WenetSpeech datasets, and our proposal achieves state-of-the-art results while significantly reducing computational overhead compared to vanilla RNN-T models. Our work is the first to empower CIF-based models to surpass the performance of other auto-regressive models."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "RNN Transducer",
21
+ "text": "The RNN-T model consists of three components, which are the encoder, the predictor network, and the joint network, respectively. Given the acoustic input and the corresponding target sequence , where represents the length of the acoustic input and denotes the number of symbols in , the output of RNN-T is calculated as follows:\nthe semantic feature of the -th symbol produced by the predictor network.\nAfter getting via the joint network, the posterior probability of the predicted symbol is:\nwhere is the weight of the classification layer, whose output dimension is , representing the number of all symbols including blank. belongs to , which denotes a possible RNN-T decoding path, and can be derived from by removing all symbols in it. During training, we use to represent all possible decoding paths, and the RNN-T loss is given as:\nwhere and represent the values of and corresponding to in the decoded path , respectively."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Continuous Integrate-and-Fire",
27
+ "text": "The CIF module is a soft and monotonic alignment mechanism that has recently garnered attention in the speech community and demonstrated success in non-autoregressive attention-based encoder-decoder (AED) ASR models [16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###]. As shown for the CIF module in Fig. 2 ###reference_###(b), given the encoder output , the weight is obtained by passing it through a convolution layer, a fully connected layer with one output dimension, and a sigmoid activation. After that, CIF accumulates forwardly until the weight sum reaches a threshold of , indicating that the acoustic feature boundary has been found. At the boundary weight , CIF divides it into two parts, and , with equal to and . is included in the current acoustic range, while is considered as the starting weight for the next acoustic range. Within the determined acoustic range, the acoustic features are integrated via a weighted summation with the corresponding , and the integrated acoustic embedding is fired at the boundary. This embedding can then be used in the subsequent to predict the corresponding symbol .\nDuring training, to ensure that the length of the integrated acoustic embedding generated by the CIF module matches the target text, the scaling strategy is adopted. represents the length of the target text, and after scaling, the sum of is equal to the sum of . Additionally, a quantity loss is proposed to force the length of the integrated acoustic embedding to be closer to ."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "CIF-Transducer",
33
+ "text": "The proposed CIF-T retains the three components of the vanilla RNN-T as in Section 2.1 ###reference_###. As shown in Fig. 2 ###reference_###(a), the CIF mechanism is used following the encoder to align the lengths of the acoustic features and semantic features, which are then fused in the joint network. Our experimental results demonstrate that combining the original CIF module with the vanilla RNN-T model already achieves competitive performance with a lower computational cost. However, we strive for even better results with the CIF-T; thus, the Funnel-CIF, Context Blocks, UGBP joint network, and auxiliary training strategy are proposed to improve each component and achieve superior performance."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Funnel-CIF and Context Blocks",
39
+ "text": "As introduced in Section 2.2 ###reference_###, the CIF module integrates the acoustic features and obtains the integrated acoustic embedding by firing, where . Therefore, the alignment of the CIF module is a dynamic down-sampling process, which is accompanied by the loss of some of the original acoustic information. Thus, we employ Funnel Attention [19 ###reference_b19###] after the CIF module to supplement information according to the following formulation:\nwhere the query for calculating Funnel Attention is the output of the CIF module , and the key and value vectors are the original acoustic features . Then, the reacquired information is supplemented to via a residual connection. In this way, more abundant acoustic information is preserved for later joint decoding. Additionally, as mentioned in [16 ###reference_b16###], the contextual correlation among the CIF outputs is weak. To address this problem, we employ a series of Conformer layers to act as the Context Blocks, which enhance the contextual dependencies of each token in the CIF outputs. For brevity of representation, we still use to represent the output of the Context Blocks.\n###table_1###"
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Unified Gating and Bilinear Pooling Joint Network",
45
+ "text": "Many works [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###] suggest that the original fusion using a single linear layer is not effective enough. In this work, we propose the use of the Unified Gating and Bilinear Pooling joint network [20 ###reference_b20###] to achieve a more effective fusion of acoustic and semantic features. The UGBP first performs dynamic selective fusion of the two modality features in the channel dimension with gating. After that, a low-rank approximation of bilinear pooling is used for a more powerful feature fusion. The procedures can be written as:\n###table_2### Thanks to the use of the CIF mechanism, we obtain the length-aligned joint network inputs and in advance. Thus we can simply sum the two modal features and still maintain the original three dimensions, contrary to the traditional RNN-T, which requires summing the two by broadcasting to obtain a four-dimensional tensor. With this, the computational overhead of fusion is significantly reduced. Finally, the output of UGBP is obtained after a shortcut connection and tanh activation.\nwhere and are the weights of the fully connected layers for the transformation of and , respectively. The posterior probability of the predicted symbol is similarly calculated by Eq. 4 ###reference_###. Unlike the traditional use of the RNN-T loss in Eq. 5 ###reference_###, we use the CE loss to constrain the model."
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "Auxiliary Training Strategy",
51
+ "text": "In addition to constraining the CIF-T using the CE loss described in Section 3.2 ###reference_###, we also employ additional auxiliary training objectives. Specifically, we use the CTC loss and the quantity loss , as described in Section 2.2 ###reference_###, to assist in training the encoder and CIF module. Additionally, we utilize a CE loss to constrain the predictor network to improve its understanding of contextual semantic information, which we refer to as . Fig. 2 ###reference_###(a) illustrates the specific implementation of these auxiliary losses. Consequently, the total loss optimized in training is as follows:\nwhere the coefficients are hyper-parameters that balance the different losses, and their specific values are set experimentally.\n###table_3### ###table_4###"
52
+ },
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Experiments",
57
+ "text": ""
58
+ },
59
+ {
60
+ "section_id": "4.1",
61
+ "parent_section_id": "4",
62
+ "section_name": "Experimental Setup",
63
+ "text": "We conduct experiments on two publicly available datasets, the 170-hour AISHELL-1 [28 ###reference_b28###] and the 10000-hour WenetSpeech [26 ###reference_b26###]. For all experiments, we represent the input as a sequence of 80-dim log-Mel filter bank features and set the frame length and shift to 25 ms and 10 ms, respectively. Speed perturbation [29 ###reference_b29###] and SpecAugment [30 ###reference_b30###] are used before training on both datasets, and the features are normalized using Global CMVN [31 ###reference_b31###].\nWe present three versions of CIF-T models, S, M, and L, which share the same two-layer reduced embedding predictor network [32 ###reference_b32###] and UGBP joint network with the same model dimension of 256. We use a Conformer with 4-times down-sampling CNN layers as our encoder, and the different encoder configurations for the three models are shown in Table 1 ###reference_###, noting that the values separated by semicolons in the \u201cLayers\u201d column represent the number of layers in the Encoder and Context Blocks, respectively. Our experiments are conducted using NVIDIA A100 GPUs, and we use the Adam [33 ###reference_b33###] optimizer with 25000 warm-up steps for the AISHELL-1 dataset and 5000 warm-up steps for the WenetSpeech dataset. For decoding, we obtain the final model by averaging the selected top-K epochs with the lowest loss on the validation set. The other configurations are the same as those presented in WeNet [25 ###reference_b25###]. The hyper-parameters in Eq. 10 ###reference_### are set to 1, 1, and 0.3, respectively, in all experiments."
64
+ },
65
+ {
66
+ "section_id": "4.2",
67
+ "parent_section_id": "4",
68
+ "section_name": "Results",
69
+ "text": "Table 2 ###reference_### shows the character error rate (CER) results on the AISHELL-1 dataset. We use CTC loss and LM loss on the vanilla RNN-T model as our baseline model RNN-T\u2021. When just employing the CIF mechanism to replace the alignment process in the RNN-T\u2021, CIF-T\u2021 achieves a competitive result of 4.6%/5.3% on the dev/test set compared to 4.7%/5.3% for the RNN-T\u2021 baseline. With the improvements proposed in this work, the CIF-T(S) and CIF-T(M) achieve consistent or superior performance compared to the best publicly available Conformer-based AED models, CIF-based models, and RNN-T-based models, with an equal or smaller number of parameters. Furthermore, the CIF-T(L) achieves a state-of-the-art result of 4.1%/4.3% on the dev/test set. We also evaluate the results of applying UGBP to RNN-T\u2021, and although the better fusion method improves performance, the results still fall behind our CIF-T models.\nThe CER results on WenetSpeech are presented in Table 3 ###reference_###. CIF-T achieves significantly lower CER results than ESPnet and Wenet with essentially the same parameters, while achieving competitive results compared to Conformer-MoE, which has more than three times as many parameters.\n###table_5### ###table_6### Table 4 ###reference_### provides an evaluation of the computational resource consumption of CIF-T and RNN-T models by comparing their respective maximum batch sizes on a single 40G NVIDIA A100 GPU. We utilize the Torchaudio [34 ###reference_b34###] RNN-T loss to train the RNN-T models, and ensure that both models have the same number of parameters. It can be observed that CIF-T can handle a batch size of 72, but exceeds the memory limit when the batch size is increased to 96. In contrast, RNN-T cannot train with a batch size of 32, indicating a larger bottleneck in computational overhead compared to the proposed CIF-T."
70
+ },
71
+ {
72
+ "section_id": "4.3",
73
+ "parent_section_id": "4",
74
+ "section_name": "Ablation Study and Analysis",
75
+ "text": "Table 5 ###reference_### shows the performance benefits brought by progressively adding the proposed improvement modules. The experiments prove that the proposed UGBP, Funnel-CIF, and Context Blocks are effective in improving model performance.\nFurthermore, we investigate the role of the predictor network for RNN-T and CIF-T; both models use the auxiliary training strategy and UGBP to fuse acoustic and semantic features. As shown in Table 6 ###reference_###, if we randomly re-initialize the predictor network of the fully trainable RNN-T model, the CER on the test set increases from 5.0% to 5.4%, which is only 0.4 percentage points higher, indicating that the predictor network does not play an indispensable role in correct prediction. Conversely, for the CIF-T model, re-initializing the predictor network leads to a substantial increase in CER from 4.8% to 6.7%, a difference of 1.9 percentage points. These results suggest that directly constraining the prediction probability with the CE loss in CIF-T increases the dependence of the model on the semantic information provided by the predictor network. Adopting this approach will be conducive to the integration of improvements related to the predictor network."
76
+ },
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Conclusion",
81
+ "text": "In this work, we propose a novel architecture for ASR, namely CIF-T. By using an efficient CIF mechanism in the RNN-T model instead of the RNN-T loss to achieve length alignment of the input audio and target sequence, we achieve a reduction in computational overhead and an enhanced role for the predictor network. Additionally, we propose the Funnel-CIF, UGBP joint network, Context Blocks, and auxiliary training strategy to further improve the performance. Experiments on the AISHELL-1 and WenetSpeech datasets demonstrate that our approach achieves state-of-the-art results. In the future, we will take full advantage of the monotonic alignment property of CIF and explore its application to streaming models."
82
+ }
83
+ ],
84
+ "appendix": [],
85
+ "tables": {
86
+ "1": {
87
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.4.1.1\">Table 1</span>: </span>Encoder hyper-parameters for CIF-T of S, M, and L.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T1.5\">\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S3.T1.5.1.1\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.1.1\" style=\"font-size:80%;\">Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.1.2\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.2.1\" style=\"font-size:80%;\">Layers</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.1.3\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.3.1\" style=\"font-size:80%;\">Model Dim</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.1.4\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.4.1\" style=\"font-size:80%;\">Heads</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.1.5\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.5.1\" style=\"font-size:80%;\">FFN Dim</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.5.1.6\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.1.6.1\" style=\"font-size:80%;\">Size (M)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.5.2.1\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.2.1.1\" style=\"font-size:80%;\">CIF-T(S)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.2.2\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.2.2.1\" style=\"font-size:80%;\">8;2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.2.3\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.2.3.1\" style=\"font-size:80%;\">256</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.2.4\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.2.4.1\" style=\"font-size:80%;\">4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.2.5\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.2.5.1\" style=\"font-size:80%;\">2048</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.2.6\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.2.6.1\" style=\"font-size:80%;\">35</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.5.3.1\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.3.1.1\" style=\"font-size:80%;\">CIF-T(M)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.3.2\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.3.2.1\" style=\"font-size:80%;\">15;2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.3.3\" 
style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.3.3.1\" style=\"font-size:80%;\">256</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.3.4\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.3.4.1\" style=\"font-size:80%;\">4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.3.5\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.3.5.1\" style=\"font-size:80%;\">2048</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.3.6\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.3.6.1\" style=\"font-size:80%;\">50</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S3.T1.5.4.1\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.4.1.1\" style=\"font-size:80%;\">CIF-T(L)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.5.4.2\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.4.2.1\" style=\"font-size:80%;\">16;2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.5.4.3\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.4.3.1\" style=\"font-size:80%;\">512</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.5.4.4\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.4.4.1\" style=\"font-size:80%;\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.5.4.5\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.4.5.1\" style=\"font-size:80%;\">2048</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T1.5.4.6\" style=\"padding-left:4.6pt;padding-right:4.6pt;\"><span class=\"ltx_text\" id=\"S3.T1.5.4.6.1\" style=\"font-size:80%;\">130</span></td>\n</tr>\n</table>\n</figure>",
88
+ "capture": "Table 1: Encoder hyper-parameters for CIF-T of S, M, and L."
89
+ },
90
+ "2": {
91
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.18.1.1\">Table 2</span>: </span>Results on the AISHELL-1 dataset. denotes the best results in the paper. represents the vanilla model without improvements.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T2.12\">\n<tr class=\"ltx_tr\" id=\"S3.T2.12.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S3.T2.12.9.1\" rowspan=\"2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.12.9.1.1\" style=\"font-size:80%;\">Models</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.12.9.2\" rowspan=\"2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.12.9.2.1\" style=\"font-size:80%;\">Size (M)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S3.T2.12.9.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.12.9.3.1\" style=\"font-size:80%;\">CER (%)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.10.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\">\n<span class=\"ltx_text\" id=\"S3.T2.12.10.1.1\"></span><span class=\"ltx_text\" id=\"S3.T2.12.10.1.2\" style=\"font-size:80%;\"> </span><span class=\"ltx_text\" id=\"S3.T2.12.10.1.3\" style=\"font-size:80%;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T2.12.10.1.3.1\">\n<span class=\"ltx_tr\" id=\"S3.T2.12.10.1.3.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T2.12.10.1.3.1.1.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\">Dev</span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T2.12.10.1.4\"></span><span class=\"ltx_text\" id=\"S3.T2.12.10.1.5\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.10.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\">\n<span class=\"ltx_text\" id=\"S3.T2.12.10.2.1\"></span><span class=\"ltx_text\" id=\"S3.T2.12.10.2.2\" style=\"font-size:80%;\"> </span><span class=\"ltx_text\" id=\"S3.T2.12.10.2.3\" style=\"font-size:80%;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T2.12.10.2.3.1\">\n<span class=\"ltx_tr\" id=\"S3.T2.12.10.2.3.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T2.12.10.2.3.1.1.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\">Test</span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T2.12.10.2.4\"></span><span class=\"ltx_text\" id=\"S3.T2.12.10.2.5\" style=\"font-size:80%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.12.11.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.12.11.1.1\" style=\"font-size:80%;\">Conformer AED</span></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T2.12.11.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T2.12.11.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T2.12.11.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.1\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.5.1.1\" 
style=\"padding-left:10.5pt;padding-right:10.5pt;\">\n<span class=\"ltx_text\" id=\"S3.T2.5.1.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\n\u2003ESPnet</span><span class=\"ltx_note ltx_role_footnote\" id=\"footnote1\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">1</sup><span class=\"ltx_tag ltx_tag_note\">1</span>https://github.com/espnet/espnet/tree/master</span></span></span><span class=\"ltx_text\" id=\"S3.T2.5.1.1.2\" style=\"font-size:80%;\"> </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T2.5.1.1.3.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.14132v4#bib.bib24\" title=\"\">24</a><span class=\"ltx_text\" id=\"S3.T2.5.1.1.4.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.5.1.2.1\" style=\"font-size:80%;\">46</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.5.1.3.1\" style=\"font-size:80%;\">4.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.5.1.4.1\" style=\"font-size:80%;\">4.6</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.6.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.6.2.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\">\n<span class=\"ltx_text\" id=\"S3.T2.6.2.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003Wenet</span><span class=\"ltx_note ltx_role_footnote\" id=\"footnote2\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">2</sup><span class=\"ltx_tag ltx_tag_note\">2</span>https://github.com/wenet-e2e/wenet/tree/main</span></span></span><span class=\"ltx_text\" id=\"S3.T2.6.2.1.2\" style=\"font-size:80%;\"> </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T2.6.2.1.3.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.14132v4#bib.bib25\" title=\"\">25</a><span class=\"ltx_text\" id=\"S3.T2.6.2.1.4.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.6.2.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.6.2.2.1\" style=\"font-size:80%;\">47</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.6.2.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.6.2.3.1\" style=\"font-size:80%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.6.2.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.6.2.4.1\" style=\"font-size:80%;\">4.6</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.12.12.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.12.12.1.1\" style=\"font-size:80%;\">CIF based</span></td>\n<td class=\"ltx_td\" id=\"S3.T2.12.12.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"></td>\n<td class=\"ltx_td\" id=\"S3.T2.12.12.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"></td>\n<td class=\"ltx_td\" id=\"S3.T2.12.12.4\" 
style=\"padding-left:10.5pt;padding-right:10.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.13\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.12.13.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\">\n<span class=\"ltx_text\" id=\"S3.T2.12.13.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003Conformer-CIF </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T2.12.13.1.2.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.14132v4#bib.bib16\" title=\"\">16</a><span class=\"ltx_text\" id=\"S3.T2.12.13.1.3.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.13.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.13.2.1\" style=\"font-size:80%;\">45</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.13.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.13.3.1\" style=\"font-size:80%;\">4.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.13.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.13.4.1\" style=\"font-size:80%;\">5.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.7.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.7.3.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\">\n<span class=\"ltx_text\" id=\"S3.T2.7.3.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003Conformer-CIF</span><sup class=\"ltx_sup\" id=\"S3.T2.7.3.1.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S3.T2.7.3.1.2.1\" style=\"font-size:80%;\">\u2020</span></sup><span class=\"ltx_text\" id=\"S3.T2.7.3.1.3\" style=\"font-size:80%;\"> </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T2.7.3.1.4.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.14132v4#bib.bib16\" title=\"\">16</a><span class=\"ltx_text\" id=\"S3.T2.7.3.1.5.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.7.3.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.7.3.2.1\" style=\"font-size:80%;\">55</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.7.3.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.7.3.3.1\" style=\"font-size:80%;\">4.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.7.3.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.7.3.4.1\" style=\"font-size:80%;\">4.9</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.8.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.8.4.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\">\n<span class=\"ltx_text\" id=\"S3.T2.8.4.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003Paraformer</span><span class=\"ltx_note ltx_role_footnote\" id=\"footnote3\"><sup class=\"ltx_note_mark\">3</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">3</sup><span class=\"ltx_tag ltx_tag_note\">3</span>https://modelscope.cn/models/damo/speech_paraformer_asr_nat-zh-cn-16k-aishell1-vocab4234-pytorch</span></span></span><span class=\"ltx_text\" id=\"S3.T2.8.4.1.2\" style=\"font-size:80%;\"> </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T2.8.4.1.3.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2307.14132v4#bib.bib17\" title=\"\">17</a><span class=\"ltx_text\" id=\"S3.T2.8.4.1.4.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.8.4.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.8.4.2.1\" style=\"font-size:80%;\">46</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.8.4.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.8.4.3.1\" style=\"font-size:80%;\">4.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.8.4.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.8.4.4.1\" style=\"font-size:80%;\">5.1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.14\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.12.14.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.12.14.1.1\" style=\"font-size:80%;\">RNN-T based</span></td>\n<td class=\"ltx_td\" id=\"S3.T2.12.14.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"></td>\n<td class=\"ltx_td\" id=\"S3.T2.12.14.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"></td>\n<td class=\"ltx_td\" id=\"S3.T2.12.14.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.9.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.9.5.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\">\n<span class=\"ltx_text\" id=\"S3.T2.9.5.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003K2</span><span class=\"ltx_note ltx_role_footnote\" id=\"footnote4\"><sup class=\"ltx_note_mark\">4</sup><span class=\"ltx_note_outer\"><span class=\"ltx_note_content\"><sup class=\"ltx_note_mark\">4</sup><span class=\"ltx_tag ltx_tag_note\">4</span>https://github.com/k2-fsa/icefall/tree/master</span></span></span><span class=\"ltx_text\" id=\"S3.T2.9.5.1.2\" style=\"font-size:80%;\"> </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T2.9.5.1.3.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.14132v4#bib.bib5\" title=\"\">5</a><span class=\"ltx_text\" id=\"S3.T2.9.5.1.4.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.9.5.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.9.5.2.1\" style=\"font-size:80%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.9.5.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.9.5.3.1\" style=\"font-size:80%;\">4.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.9.5.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.9.5.4.1\" style=\"font-size:80%;\">5.0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.15\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.12.15.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\">\n<span class=\"ltx_text\" id=\"S3.T2.12.15.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003ESPnet</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.14132v4#footnote1\" style=\"font-size:80%;\" title=\"footnote 1 \u2023 Table 2 \u2023 3.2 Unified Gating and Bilinear Pooling Joint Network \u2023 3 CIF-Transducer \u2023 CIF-T: A Novel CIF-based Transducer Architecture for Automatic Speech Recognition\"><span class=\"ltx_text ltx_ref_tag\">1</span></a><span class=\"ltx_text\" 
id=\"S3.T2.12.15.1.2\" style=\"font-size:80%;\"> </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T2.12.15.1.3.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.14132v4#bib.bib24\" title=\"\">24</a><span class=\"ltx_text\" id=\"S3.T2.12.15.1.4.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.15.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.15.2.1\" style=\"font-size:80%;\">35</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.15.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.15.3.1\" style=\"font-size:80%;\">4.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.15.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.15.4.1\" style=\"font-size:80%;\">4.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.16\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.12.16.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\">\n<span class=\"ltx_text\" id=\"S3.T2.12.16.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003Wenet</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.14132v4#footnote2\" style=\"font-size:80%;\" title=\"footnote 2 \u2023 Table 2 \u2023 3.2 Unified Gating and Bilinear Pooling Joint Network \u2023 3 CIF-Transducer \u2023 CIF-T: A Novel CIF-based Transducer Architecture for Automatic Speech Recognition\"><span class=\"ltx_text ltx_ref_tag\">2</span></a><span class=\"ltx_text\" id=\"S3.T2.12.16.1.2\" style=\"font-size:80%;\"> </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T2.12.16.1.3.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.14132v4#bib.bib25\" title=\"\">25</a><span class=\"ltx_text\" id=\"S3.T2.12.16.1.4.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.16.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.16.2.1\" style=\"font-size:80%;\">53</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.16.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.16.3.1\" style=\"font-size:80%;\">-</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.16.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.16.4.1\" style=\"font-size:80%;\">4.5</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.17\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.12.17.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.12.17.1.1\" style=\"font-size:80%;\">RNN-T (ours)</span></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T2.12.17.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T2.12.17.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T2.12.17.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.10.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.10.6.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\">\n<span class=\"ltx_text\" id=\"S3.T2.10.6.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003RNN-T</span><sup class=\"ltx_sup\" 
id=\"S3.T2.10.6.1.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S3.T2.10.6.1.2.1\" style=\"font-size:80%;\">\u2021</span></sup>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.10.6.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.10.6.2.1\" style=\"font-size:80%;\">35</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.10.6.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.10.6.3.1\" style=\"font-size:80%;\">4.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.10.6.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.10.6.4.1\" style=\"font-size:80%;\">5.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.18\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.12.18.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.18.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003\u2003+ UGBP</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.18.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.18.2.1\" style=\"font-size:80%;\">38</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.18.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.18.3.1\" style=\"font-size:80%;\">4.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.18.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.18.4.1\" style=\"font-size:80%;\">5.0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.19\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T2.12.19.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.12.19.1.1\" style=\"font-size:80%;\">CIF-T (ours)</span></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T2.12.19.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T2.12.19.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T2.12.19.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.12.8.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\">\n<span class=\"ltx_text\" id=\"S3.T2.12.8.2.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003CIF-T</span><sup class=\"ltx_sup\" id=\"S3.T2.12.8.2.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S3.T2.12.8.2.2.1\" style=\"font-size:80%;\">\u2021</span></sup><span class=\"ltx_text\" id=\"S3.T2.12.8.2.3\" style=\"font-size:80%;\">(RNN-T</span><sup class=\"ltx_sup\" id=\"S3.T2.12.8.2.4\"><span class=\"ltx_text ltx_font_italic\" id=\"S3.T2.12.8.2.4.1\" style=\"font-size:80%;\">\u2021</span></sup><span class=\"ltx_text\" id=\"S3.T2.12.8.2.5\" style=\"font-size:80%;\"> + CIF)</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.8.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.8.3.1\" style=\"font-size:80%;\">35</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.8.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.8.4.1\" style=\"font-size:80%;\">4.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.8.5\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" 
id=\"S3.T2.12.8.5.1\" style=\"font-size:80%;\">5.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.20\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.12.20.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.20.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003CIF-T(S)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.20.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.20.2.1\" style=\"font-size:80%;\">35</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.20.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.20.3.1\" style=\"font-size:80%;\">4.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.20.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.20.4.1\" style=\"font-size:80%;\">4.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.21\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T2.12.21.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.21.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003CIF-T(M)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.21.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.21.2.1\" style=\"font-size:80%;\">50</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.21.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.21.3.1\" style=\"font-size:80%;\">4.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.12.21.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.21.4.1\" style=\"font-size:80%;\">4.5</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.22\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T2.12.22.1\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.22.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003CIF-T(L)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.12.22.2\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text\" id=\"S3.T2.12.22.2.1\" style=\"font-size:80%;\">130</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.12.22.3\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.12.22.3.1\" style=\"font-size:80%;\">4.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.12.22.4\" style=\"padding-left:10.5pt;padding-right:10.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.12.22.4.1\" style=\"font-size:80%;\">4.3</span></td>\n</tr>\n</table>\n</figure>",
92
+ "capture": "Table 2: Results on the AISHELL-1 dataset. denotes the best results in the paper. represents the vanilla model without improvements."
93
+ },
94
+ "3": {
95
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.4.1.1\">Table 3</span>: </span>Results on the WenetSpeech dataset.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T3.5\">\n<tr class=\"ltx_tr\" id=\"S3.T3.5.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S3.T3.5.1.1\" rowspan=\"2\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.5.1.1.1\" style=\"font-size:80%;\">Models</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.5.1.2\" rowspan=\"2\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.5.1.2.1\" style=\"font-size:80%;\">Size (M)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S3.T3.5.1.3\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.5.1.3.1\" style=\"font-size:80%;\">CER (%)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.5.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.5.2.1\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">\n<span class=\"ltx_text\" id=\"S3.T3.5.2.1.1\"></span><span class=\"ltx_text\" id=\"S3.T3.5.2.1.2\" style=\"font-size:80%;\"> </span><span class=\"ltx_text\" id=\"S3.T3.5.2.1.3\" style=\"font-size:80%;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T3.5.2.1.3.1\">\n<span class=\"ltx_tr\" id=\"S3.T3.5.2.1.3.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T3.5.2.1.3.1.1.1\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">Dev</span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T3.5.2.1.4\"></span><span class=\"ltx_text\" id=\"S3.T3.5.2.1.5\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.5.2.2\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">\n<span class=\"ltx_text\" id=\"S3.T3.5.2.2.1\"></span><span class=\"ltx_text\" id=\"S3.T3.5.2.2.2\" style=\"font-size:80%;\"> </span><span class=\"ltx_text\" id=\"S3.T3.5.2.2.3\" style=\"font-size:80%;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T3.5.2.2.3.1\">\n<span class=\"ltx_tr\" id=\"S3.T3.5.2.2.3.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T3.5.2.2.3.1.1.1\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">Test_Net</span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T3.5.2.2.4\"></span><span class=\"ltx_text\" id=\"S3.T3.5.2.2.5\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.5.2.3\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">\n<span class=\"ltx_text\" id=\"S3.T3.5.2.3.1\"></span><span class=\"ltx_text\" id=\"S3.T3.5.2.3.2\" style=\"font-size:80%;\"> </span><span class=\"ltx_text\" id=\"S3.T3.5.2.3.3\" style=\"font-size:80%;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T3.5.2.3.3.1\">\n<span class=\"ltx_tr\" id=\"S3.T3.5.2.3.3.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T3.5.2.3.3.1.1.1\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">Test_Meeting</span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T3.5.2.3.4\"></span><span class=\"ltx_text\" id=\"S3.T3.5.2.3.5\" style=\"font-size:80%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.5.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T3.5.3.1\" 
style=\"padding-left:4.3pt;padding-right:4.3pt;\">\n<span class=\"ltx_text\" id=\"S3.T3.5.3.1.1\" style=\"font-size:80%;\">ESPnet </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T3.5.3.1.2.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.14132v4#bib.bib26\" title=\"\">26</a><span class=\"ltx_text\" id=\"S3.T3.5.3.1.3.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.5.3.2\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text\" id=\"S3.T3.5.3.2.1\" style=\"font-size:80%;\">117</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.5.3.3\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text\" id=\"S3.T3.5.3.3.1\" style=\"font-size:80%;\">9.70</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.5.3.4\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text\" id=\"S3.T3.5.3.4.1\" style=\"font-size:80%;\">8.90</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.5.3.5\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text\" id=\"S3.T3.5.3.5.1\" style=\"font-size:80%;\">15.90</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.5.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.5.4.1\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">\n<span class=\"ltx_text\" id=\"S3.T3.5.4.1.1\" style=\"font-size:80%;\">Wenet </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T3.5.4.1.2.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.14132v4#bib.bib26\" title=\"\">26</a><span class=\"ltx_text\" id=\"S3.T3.5.4.1.3.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.5.4.2\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text\" id=\"S3.T3.5.4.2.1\" style=\"font-size:80%;\">123</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.5.4.3\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text\" id=\"S3.T3.5.4.3.1\" style=\"font-size:80%;\">8.88</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.5.4.4\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text\" id=\"S3.T3.5.4.4.1\" style=\"font-size:80%;\">9.70</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.5.4.5\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text\" id=\"S3.T3.5.4.5.1\" style=\"font-size:80%;\">15.59</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.5.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T3.5.5.1\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">\n<span class=\"ltx_text\" id=\"S3.T3.5.5.1.1\" style=\"font-size:80%;\">Conformer-MoE </span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T3.5.5.1.2.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2307.14132v4#bib.bib27\" title=\"\">27</a><span class=\"ltx_text\" id=\"S3.T3.5.5.1.3.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.5.5.2\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text\" id=\"S3.T3.5.5.2.1\" style=\"font-size:80%;\">425</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.5.5.3\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text\" id=\"S3.T3.5.5.3.1\" 
style=\"font-size:80%;\">7.67</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.5.5.4\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text\" id=\"S3.T3.5.5.4.1\" style=\"font-size:80%;\">8.28</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.5.5.5\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text\" id=\"S3.T3.5.5.5.1\" style=\"font-size:80%;\">13.96</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.5.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S3.T3.5.6.1\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text\" id=\"S3.T3.5.6.1.1\" style=\"font-size:80%;\">CIF-T (L)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.5.6.2\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text\" id=\"S3.T3.5.6.2.1\" style=\"font-size:80%;\">130</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.5.6.3\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.5.6.3.1\" style=\"font-size:80%;\">7.81</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.5.6.4\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.5.6.4.1\" style=\"font-size:80%;\">8.73</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T3.5.6.5\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.5.6.5.1\" style=\"font-size:80%;\">14.12</span></td>\n</tr>\n</table>\n</figure>",
96
+ "capture": "Table 3: Results on the WenetSpeech dataset."
97
+ },
98
+ "4": {
99
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T4\">\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.4.1.1\">Table 4</span>: </span>Comparison of the maximum batch size trained with RNN-T and CIF-T on a single GPU.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T4.5\">\n<tr class=\"ltx_tr\" id=\"S3.T4.5.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S3.T4.5.1.1\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.5.1.1.1\" style=\"font-size:80%;\">Batch Size</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.5.1.2\" style=\"padding-left:7.4pt;padding-right:7.4pt;\">\n<span class=\"ltx_text\" id=\"S3.T4.5.1.2.1\"></span><span class=\"ltx_text\" id=\"S3.T4.5.1.2.2\" style=\"font-size:80%;\"> </span><span class=\"ltx_text\" id=\"S3.T4.5.1.2.3\" style=\"font-size:80%;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.5.1.2.3.1\">\n<span class=\"ltx_tr\" id=\"S3.T4.5.1.2.3.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T4.5.1.2.3.1.1.1\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.5.1.2.3.1.1.1.1\">8</span></span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T4.5.1.2.4\"></span><span class=\"ltx_text\" id=\"S3.T4.5.1.2.5\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.5.1.3\" style=\"padding-left:7.4pt;padding-right:7.4pt;\">\n<span class=\"ltx_text\" id=\"S3.T4.5.1.3.1\"></span><span class=\"ltx_text\" id=\"S3.T4.5.1.3.2\" style=\"font-size:80%;\"> </span><span class=\"ltx_text\" id=\"S3.T4.5.1.3.3\" style=\"font-size:80%;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.5.1.3.3.1\">\n<span class=\"ltx_tr\" id=\"S3.T4.5.1.3.3.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T4.5.1.3.3.1.1.1\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.5.1.3.3.1.1.1.1\">16</span></span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T4.5.1.3.4\"></span><span class=\"ltx_text\" id=\"S3.T4.5.1.3.5\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.5.1.4\" style=\"padding-left:7.4pt;padding-right:7.4pt;\">\n<span class=\"ltx_text\" id=\"S3.T4.5.1.4.1\"></span><span class=\"ltx_text\" id=\"S3.T4.5.1.4.2\" style=\"font-size:80%;\"> </span><span class=\"ltx_text\" id=\"S3.T4.5.1.4.3\" style=\"font-size:80%;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.5.1.4.3.1\">\n<span class=\"ltx_tr\" id=\"S3.T4.5.1.4.3.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T4.5.1.4.3.1.1.1\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.5.1.4.3.1.1.1.1\">32</span></span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T4.5.1.4.4\"></span><span class=\"ltx_text\" id=\"S3.T4.5.1.4.5\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.5.1.5\" style=\"padding-left:7.4pt;padding-right:7.4pt;\">\n<span class=\"ltx_text\" id=\"S3.T4.5.1.5.1\"></span><span class=\"ltx_text\" id=\"S3.T4.5.1.5.2\" style=\"font-size:80%;\"> </span><span class=\"ltx_text\" id=\"S3.T4.5.1.5.3\" style=\"font-size:80%;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.5.1.5.3.1\">\n<span 
class=\"ltx_tr\" id=\"S3.T4.5.1.5.3.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T4.5.1.5.3.1.1.1\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.5.1.5.3.1.1.1.1\">48</span></span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T4.5.1.5.4\"></span><span class=\"ltx_text\" id=\"S3.T4.5.1.5.5\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.5.1.6\" style=\"padding-left:7.4pt;padding-right:7.4pt;\">\n<span class=\"ltx_text\" id=\"S3.T4.5.1.6.1\"></span><span class=\"ltx_text\" id=\"S3.T4.5.1.6.2\" style=\"font-size:80%;\"> </span><span class=\"ltx_text\" id=\"S3.T4.5.1.6.3\" style=\"font-size:80%;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.5.1.6.3.1\">\n<span class=\"ltx_tr\" id=\"S3.T4.5.1.6.3.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T4.5.1.6.3.1.1.1\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.5.1.6.3.1.1.1.1\">72</span></span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T4.5.1.6.4\"></span><span class=\"ltx_text\" id=\"S3.T4.5.1.6.5\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.5.1.7\" style=\"padding-left:7.4pt;padding-right:7.4pt;\">\n<span class=\"ltx_text\" id=\"S3.T4.5.1.7.1\"></span><span class=\"ltx_text\" id=\"S3.T4.5.1.7.2\" style=\"font-size:80%;\"> </span><span class=\"ltx_text\" id=\"S3.T4.5.1.7.3\" style=\"font-size:80%;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.5.1.7.3.1\">\n<span class=\"ltx_tr\" id=\"S3.T4.5.1.7.3.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T4.5.1.7.3.1.1.1\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.5.1.7.3.1.1.1.1\">96</span></span></span>\n</span></span><span class=\"ltx_text\" id=\"S3.T4.5.1.7.4\"></span><span class=\"ltx_text\" id=\"S3.T4.5.1.7.5\" style=\"font-size:80%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.5.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T4.5.2.1\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text\" id=\"S3.T4.5.2.1.1\" style=\"font-size:80%;\">RNN-T (Torchaudio)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.5.2.2\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text\" id=\"S3.T4.5.2.2.1\" style=\"font-size:80%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.5.2.3\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text\" id=\"S3.T4.5.2.3.1\" style=\"font-size:80%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.5.2.4\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text\" id=\"S3.T4.5.2.4.1\" style=\"font-size:80%;\">\u2717</span></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T4.5.2.5\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T4.5.2.6\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T4.5.2.7\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.5.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T4.5.3.1\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text\" id=\"S3.T4.5.3.1.1\" 
style=\"font-size:80%;\">CIF-T</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.5.3.2\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text\" id=\"S3.T4.5.3.2.1\" style=\"font-size:80%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.5.3.3\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text\" id=\"S3.T4.5.3.3.1\" style=\"font-size:80%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.5.3.4\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text\" id=\"S3.T4.5.3.4.1\" style=\"font-size:80%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.5.3.5\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text\" id=\"S3.T4.5.3.5.1\" style=\"font-size:80%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.5.3.6\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text\" id=\"S3.T4.5.3.6.1\" style=\"font-size:80%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.5.3.7\" style=\"padding-left:7.4pt;padding-right:7.4pt;\"><span class=\"ltx_text\" id=\"S3.T4.5.3.7.1\" style=\"font-size:80%;\">\u2717</span></td>\n</tr>\n</table>\n</figure>",
100
+ "capture": "Table 4: Comparison of the maximum batch size trained with RNN-T and CIF-T on a single GPU."
101
+ },
102
+ "5": {
103
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.5.1.1\">Table 5</span>: </span>Ablation study of the UGBP joint network, Funnel-CIF, and Context Blocks for CIF-T.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T5.1\">\n<tr class=\"ltx_tr\" id=\"S4.T5.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T5.1.2.1\" style=\"padding-left:11.4pt;padding-right:11.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.2.1.1\" style=\"font-size:80%;\">Models</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.1.2.2\" style=\"padding-left:11.4pt;padding-right:11.4pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.1.2.2.1\" style=\"font-size:80%;\">CER</span><span class=\"ltx_text\" id=\"S4.T5.1.2.2.2\" style=\"font-size:80%;\">(%)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T5.1.1.1\" style=\"padding-left:11.4pt;padding-right:11.4pt;\">\n<span class=\"ltx_text\" id=\"S4.T5.1.1.1.1\" style=\"font-size:80%;\">CIF-T</span><sup class=\"ltx_sup\" id=\"S4.T5.1.1.1.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T5.1.1.1.2.1\" style=\"font-size:80%;\">\u2021</span></sup>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.2\" style=\"padding-left:11.4pt;padding-right:11.4pt;\"><span class=\"ltx_text\" id=\"S4.T5.1.1.2.1\" style=\"font-size:80%;\">5.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T5.1.3.1\" style=\"padding-left:11.4pt;padding-right:11.4pt;\"><span class=\"ltx_text\" id=\"S4.T5.1.3.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003+ UGBP</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.3.2\" style=\"padding-left:11.4pt;padding-right:11.4pt;\"><span class=\"ltx_text\" id=\"S4.T5.1.3.2.1\" style=\"font-size:80%;\">5.2</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T5.1.4.1\" style=\"padding-left:11.4pt;padding-right:11.4pt;\"><span class=\"ltx_text\" id=\"S4.T5.1.4.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003\u2003+ Funnel-CIF</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.4.2\" style=\"padding-left:11.4pt;padding-right:11.4pt;\"><span class=\"ltx_text\" id=\"S4.T5.1.4.2.1\" style=\"font-size:80%;\">5.0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T5.1.5.1\" style=\"padding-left:11.4pt;padding-right:11.4pt;\"><span class=\"ltx_text\" id=\"S4.T5.1.5.1.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u2003\u2003\u2003+ Context Blocks</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.1.5.2\" style=\"padding-left:11.4pt;padding-right:11.4pt;\"><span class=\"ltx_text\" id=\"S4.T5.1.5.2.1\" style=\"font-size:80%;\">4.8</span></td>\n</tr>\n</table>\n</figure>",
104
+ "capture": "Table 5: Ablation study of the UGBP joint network, Funnel-CIF, and Context Blocks for CIF-T."
105
+ },
106
+ "6": {
107
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T6\">\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.1\">Table 6</span>: </span>Influence of re-initializing the predictor network of fully trainable RNN-T and CIF-T.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T6.5\">\n<tr class=\"ltx_tr\" id=\"S4.T6.5.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T6.5.1.1\" style=\"padding:-0.4pt 11.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.5.1.1.1\" style=\"font-size:80%;\">Models</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.5.1.2\" style=\"padding:-0.4pt 11.4pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.5.1.2.1\" style=\"font-size:80%;\">Re-Init Pred.</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T6.5.1.3\" style=\"padding:-0.4pt 11.4pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.5.1.3.1\" style=\"font-size:80%;\">CER</span><span class=\"ltx_text\" id=\"S4.T6.5.1.3.2\" style=\"font-size:80%;\">(%)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.5.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T6.5.2.1\" style=\"padding:-0.4pt 11.4pt;\"><span class=\"ltx_text\" id=\"S4.T6.5.2.1.1\" style=\"font-size:80%;\">RNN-T</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.5.2.2\" style=\"padding:-0.4pt 11.4pt;\"><span class=\"ltx_text\" id=\"S4.T6.5.2.2.1\" style=\"font-size:80%;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T6.5.2.3\" style=\"padding:-0.4pt 11.4pt;\"><span class=\"ltx_text\" id=\"S4.T6.5.2.3.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u20035.0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.5.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T6.5.3.1\" style=\"padding:-0.4pt 11.4pt;\"><span class=\"ltx_text\" id=\"S4.T6.5.3.1.1\" style=\"font-size:80%;\">RNN-T</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.5.3.2\" style=\"padding:-0.4pt 11.4pt;\"><span class=\"ltx_text\" id=\"S4.T6.5.3.2.1\" style=\"font-size:80%;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T6.5.3.3\" style=\"padding:-0.4pt 11.4pt;\"><span class=\"ltx_text\" id=\"S4.T6.5.3.3.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u20035.4 (+0.4)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T6.5.4.1\" style=\"padding:-0.4pt 11.4pt;\"><span class=\"ltx_text\" id=\"S4.T6.5.4.1.1\" style=\"font-size:80%;\">CIF-T</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.5.4.2\" style=\"padding:-0.4pt 11.4pt;\"><span class=\"ltx_text\" id=\"S4.T6.5.4.2.1\" style=\"font-size:80%;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T6.5.4.3\" style=\"padding:-0.4pt 11.4pt;\"><span class=\"ltx_text\" id=\"S4.T6.5.4.3.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u20034.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T6.5.5.1\" style=\"padding:-0.4pt 11.4pt;\"><span class=\"ltx_text\" id=\"S4.T6.5.5.1.1\" style=\"font-size:80%;\">CIF-T</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.5.5.2\" style=\"padding:-0.4pt 11.4pt;\"><span class=\"ltx_text\" id=\"S4.T6.5.5.2.1\" style=\"font-size:80%;\">\u2713</span></td>\n<td class=\"ltx_td 
ltx_align_left ltx_border_bb\" id=\"S4.T6.5.5.3\" style=\"padding:-0.4pt 11.4pt;\"><span class=\"ltx_text\" id=\"S4.T6.5.5.3.1\" style=\"font-size:80%;\">\u00a0\u00a0\u00a0\u00a0\u20036.7 (+1.9)</span></td>\n</tr>\n</table>\n</figure>",
108
+ "capture": "Table 6: Influence of re-initializing the predictor network of fully trainable RNN-T and CIF-T."
109
+ }
110
+ },
111
+ "image_paths": {
112
+ "1": {
113
+ "figure_path": "2307.14132v4_figure_1.png",
114
+ "caption": "Fig. 1: The different aggregation processes of acoustic features between RNN-T and CIF. The RNN-T emits special symbols b\u2062l\u2062a\u2062n\u2062k\ud835\udc4f\ud835\udc59\ud835\udc4e\ud835\udc5b\ud835\udc58blankitalic_b italic_l italic_a italic_n italic_k for the alignment process, while CIF aggregates the weighted \u03b1\ud835\udefc\\alphaitalic_\u03b1 of acoustic features.",
115
+ "url": "http://arxiv.org/html/2307.14132v4/x1.png"
116
+ },
117
+ "2": {
118
+ "figure_path": "2307.14132v4_figure_2.png",
119
+ "caption": "Fig. 2: The structure of the proposed CIF-Transducer and Funnel-CIF. The dashed boxes in Fig. (a) represent the modules used only for the training process. FC and Conv stand for fully connected layer and convolutional layer, respectively.",
120
+ "url": "http://arxiv.org/html/2307.14132v4/x2.png"
121
+ }
122
+ },
123
+ "validation": true,
124
+ "references": [],
125
+ "url": "http://arxiv.org/html/2307.14132v4"
126
+ }
20241127/2310.01522v3.json ADDED
@@ -0,0 +1,210 @@
1
+ {
2
+ "title": "Property-preserving numerical approximation of a Cahn\u2013Hilliard\u2013Navier\u2013Stokes model with variable density and degenerate mobility",
3
+ "abstract": "In this paper, we present a new computational framework to approximate a Cahn\u2013Hilliard\u2013Navier\u2013Stokes model with variable density and degenerate mobility that preserves the mass of the mixture, the pointwise bounds of the density and the decreasing energy.\nThis numerical scheme is based on a finite element approximation for the\nNavier\u2013Stokes fluid flow with discontinuous pressure and an upwind discontinuous Galerkin scheme for the Cahn\u2013Hilliard part.\nFinally, several numerical experiments such as a convergence test and some well-known benchmark problems are conducted.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Hydrodynamics has been considered a research field of increasing interest among the scientific community during the last few decades. In this sense, diffuse interface models were proposed as a successful alternative to model fluid-solid interaction after van der Waals introduced the foundations in the pioneering paper [van1879thermodynamic]. Afterwards, these ideas were extended to fluid mixture and several works were published in this regard. In particular, both Hohelberg and Halpering, [hohenberg1977theory], and Gurtin et al., [gurtin1996two], arrived by different approaches to the same model, the well-known Model H, which would lead to the\nCahn\u2013Hilliard\u2013Navier\u2013Stokes (CHNS) system.\nSince then, many different CHNS models have been developed using different techniques and extended to the case of fluids with different densities, see the model by Boyer [boyer2002theoretical] or by Ding et al. [ding2007diffuse]. Moreover, several of these recent models satisfy\nsome laws of thermodynamics.\nThis is the case for the model by Lowengrub and Truskinovsky, [lowengrub1998quasi], or the one by Abels et al., [abels_thermodynamically_2011], which introduces an extra convective term in the momentum equation due to the different densities of the fluids. In [kim_2012] a careful revision of several CHNS models and their applications is provided. Also, recently, a very interesting survey has been published, [ten2023unified], in which the authors, Eikelder et al., discuss different existing well-known CHNS models analyzing their advantages and disadvantages from a physical point of view. In fact, the authors of [ten2023unified] provide some notions on\nproperties a CHNS model has to satisfy in order to be physically consistent.\nOne characteristic that many of these models share is that the density of the mixture is usually interpolated as a linear function of the phase-field function. Hence, ensuring the pointwise bounds for this phase-field function in the Cahn-Hilliard equation, for instance, by using a degenerate mobility (see [acosta-soba_upwind_2022]) is crucial to ensure a physically consistent model. Also, CHNS models conserve the total mass of the mixture and, as mentioned above, they tend to be thermodynamically consistent in the sense that the solutions of these models usually minimize an underlying energy law. Therefore, as these properties are extremely important for the physical meaning of the models it is likewise important to preserve them when approximating their solutions.\nHowever, the transport of the diffuse interface by the velocity of the fluid is typically modeled by means of a convective term that is introduced into the Cahn-Hilliard equation and, as shown in previous studies such as [acosta-soba_upwind_2022], this term may lead to numerical instabilities in highly convective regimes if it is not treated carefully. The instabilities result in nonphysical spurious oscillations that make the approximation of the phase-field variable lose the pointwise bounds. In this regard, removing the numerical instabilities in the case of the convective Cahn-Hilliard model has been an object of study in several recent works, see [frank2018finite] or [acosta-soba_upwind_2022], where in the latter the authors enforce the pointwise bounds by means of a discontinuous Galerkin (DG) upwind technique. Different ideas such as the use of limiters have been used in the case of the CHNS systems. 
For instance, in [liu2022pressure], the authors\ndeveloped, by means of flux and slope limiters, a bound-preserving decoupled approximation of a CHNS simplified system with constant mobility. Later, the same model was approximated\nby high order polynomials using a decoupled scheme and a convex optimization technique with a scaling limiter to ensure the pointwise bounds, see [liu2023simple]. In this line, the recent work [guillentierra2024] has presented a numerical approximation of a CHNS which is mass conservative, energy stable and approximately pointwise bounded.\nIn addition, designing an approximation that satisfies a discrete version of the continuous energy in the diffuse-interface models is not straightforward and usually requires the use of specific time-discrete approximations such as the standard convex-splitting technique, [eyre_1998_unconditionally], or the more recently developed SAV approach, [shen2018scalar].\nIn this sense, several advancements have been made towards the approximation of the CHNS models preserving the energy-stability constraint. For instance, we can find the work [tierra_guillen_abels_2014] where the authors propose an approximation of the model in [abels_thermodynamically_2011] that decouples the phase-field equations from the fluid equations through a modified velocity. This approach was further studied in [grun_guillen-gonzalez_metzger_2016] and extended to a fully decoupled approximation that uses a pressure correction approach, [shen2015decoupled]. Other fractional time-stepping energy-stable discretizations of CHNS models can be found in [salgado2013diffuse, deteix2022new, liu2015decoupled].\nNevertheless, although it has been achieved in the case of a CHNS with a Flory-Huggins logarithmic potential (see [chen2022positivity]), to our best knowledge there is no published work on an approximation of a CHNS model with a Ginzburg-Landau polynomial potential and degenerate mobility that ensures both the mass-conservation, pointwise bounds and energy-stability properties.\nTo address this challenge,\nin this work, we provide an upwind DG approximation of the model by Abels et al. [abels_thermodynamically_2011] where all the mass-conservation, the pointwise bounds and the energy-stability properties are preserved.\nFirstly,\nin Section 2 ###reference_### we introduce the CHNS model that we are going to consider and we present its properties.\nThen, in Section 3 ###reference_### we develop the structure-preserving approximation of the aforementioned model, showing that it satisfies all the mass-conservation, pointwise bounds and energy-stability properties. Finally, in Section 4 ###reference_### we conduct several numerical experiments. First, we compute a preliminary accuracy test in Subsection 4.1 ###reference_### for all the variables in both and norms. Then, we provide a simple test where two bubbles are mixed in Subsection 4.2 ###reference_###. The results are in accordance with the previous theoretical analysis. Finally, in Subsections 4.3 ###reference_### and 4.4 ###reference_### we couple the CHNS system with a term modeling the action of gravitational forces and conduct two benchmark tests: a heavier bubble in a lighter medium and a Rayleigh-Taylor type instability."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Cahn\u2013Hilliard\u2013Navier\u2013Stokes model",
15
+ "text": "Let be a bounded polygonal domain. We consider a mixture of two fluids with different densities and introduce a phase-field function such that corresponds with fluid of density , with fluid of density and in the interface between the two fluids. Then,\nthe diffuse-interface Cahn\u2013Hilliard\u2013Navier\u2013Stokes model proposed by Abels et al. in [abels_thermodynamically_2011] and further numerically studied in [tierra_guillen_abels_2014, grun_guillen-gonzalez_metzger_2016, shen2015decoupled], can be written as follows:\nHere, and are the mean velocity and the pressure of the fluid respectively, and is the chemical potential related to the phase-field function .\nAlso, is the strain tensor, is the derivative of the Ginzburg-Landau double well potential , i.e. , is the degenerate (truncated) mobility function and\nis the extra-convective term due to different densities.\nMoreover, the density of the mixture depending on the phase-field variable , can be defined either as the solution of the mass balance equation\nor, by taking into account the equation (1c ###reference_3###), as the explicit relation\nWe have written the equation (2 ###reference_###) in its more general variational formulation since does not necessarily belong to . It is clear from (3 ###reference_###) that in is equivalent to in . Consequently, it is important the constraint to preserve the physical meaning of the model because the density of the mixture must satisfy .\nFinally, with for certain and for all is the viscosity of the mixture, is a constant related to the energy density and is a small parameter related to the thickness of the interface between the two fluids.\nSince if is a pressure function solution of (1 ###reference_###) then is also solution for any time-dependent function , it is usual to consider the zero mean-value pressure constraint .\nWe can consider the following variational formulation of problem (1 ###reference_###):\nFind such that ,\n with ,\n\nwith a.e. in , with , satisfying\nfor each .\nWe have denoted as the scalar product and\nwhere denotes the Frobenius inner product.\nThe mass of the phase-field variable is conserved, because it holds\nIn particular, the mass of the mixture is conserved, because using (3 ###reference_###),\nJust test (4c ###reference_3###) by .\n\u220e\nAssuming a sufficiently regular solution of (4 ###reference_x1###)-(4d ###reference_4###), the following energy law holds:\nwhere , with denoting the -th row of the stress tensor , and\nwhere the first term is associated to the kinetic energy and the others to the potential energy. In particular,\nthe energy is time decreasing because\nWe argue formally, by considering that all the functions that appear below are regular enough so that the expressions are true. Moreover, they are regarded as functions to be evaluated at , although, for clarity, we will omit it.\nIf we test (4 ###reference_x1###)\u2013(4d ###reference_4###) by , , and and we add up the expressions, we obtain:\nNow, testing (2 ###reference_###) by , we have\nBy adding the two previous expressions, the convective term cancels.\nHence, taking into account that\nwe can conclude that the energy law (5 ###reference_###) holds.\n\u220e"
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Structure-preserving scheme",
21
+ "text": "In this section we develop a fully coupled discretization of the model (1 ###reference_###) that preserves all properties at the discrete level, including the mass conservation, pointwise bounds of the phase-field and density of the mixture variables, and the decreasing of the energy (also called energy-stability)."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Notation",
27
+ "text": "We consider a finite element shape-regular triangular mesh in the sense of Ciarlet, [ciarlet2002finite], of size over . We denote by the set of the edges of (faces if ) with the set of the interior edges and the boundary edges, i.e. .\nNow, we fix the following orientation over the mesh :\nFor any interior edge we set the associated unit normal vector . In this sense, when referring to edge we will denote by and the elements of with and so that is exterior to pointing to .\nIf there is no ambiguity, to abbreviate the notation we will denote the previous elements and simply by and , respectively, with the assumption that their naming is always with respect to the edge and it may vary if we consider a different edge of .\nFor any boundary edge , the unit normal vector points outwards of the domain .\nTherefore, we can define the average and the jump of a function on an edge as follows:\nWe denote by and the spaces of finite element discontinuous and continuous functions, respectively, which are polynomials of degree when restricted to the elements of .\nIn this sense, we will denote the broken differential operators (see [riviere_discontinuous_2008, di_pietro_mathematical_2012]) the same way as the standard differential operators in the absence of ambiguity.\nMoreover, we take an equispaced partition of the time domain with the time step. Also, for any function depending on time, we denote\n and the discrete time derivative operator .\nFinally, we set the following notation for the positive and negative parts of a function :"
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Discrete scheme",
33
+ "text": "Following the ideas of [acosta-soba_upwind_2022, acosta-soba_KS_2022, acosta2023structure] we define the projections , and\n\nas follows:\nwhere denotes the usual scalar product in . In addition, denotes the mass-lumping scalar product in resulting from using the trapezoidal rule to approximate the scalar product in (see, for instance, [quarteroni2008numerical]). Therefore, for any elements this scalar product can be defined as\nwhere are the nodes of the element for every . These projections (7 ###reference_###)\u2013(9 ###reference_###) are well defined for every function , notice that imply for every and, therefore, .\nWe propose the following numerical scheme: find , \nwith , and such that\nfor each , , , ,\nwhere\nand\nsuch that is a convex splitting discretization of the Ginzburg-Landau double well potential for any .\nAlso, is a compatible \u201cinf-sup\u201d pair of finite-dimensional spaces satisfying that and .\nIn fact, the restriction is needed in order to guarantee the local incompressibility of in the following sense:\nwhich can be derived integrating by parts in (10b ###reference_.2###).\nThis constraint will allow us to preserve the pointwise bounds of , see Theorem 3.5 ###reference_theorem5### below. Notice that the discretization of the pressure and the divergence term (10b ###reference_.2###) is the standard Stokes DG approach [riviere_discontinuous_2008, di_pietro_mathematical_2012] for continuous velocity and discontinuous pressure.\nSome possible choices of compatible spaces are the following (see [boffi2013mixed, ern_theory_2010] for the details):\nwhich is stable for but not for .\nwhich is stable for but requires a higher computational effort. Here, denotes the space enriched with a bubble by elements of order 3.\n. Here, denotes the standard quadrilateral finite element space of order 2.\nNotice that, for any choice of this pair , the error bounds are expected to be determined by the lowest accuracy approximation\nof the phase-field function by .\nMoreover,\n is a centered discretization of the term in (4 ###reference_x1###)\ndefined as\nwhere\nthe second term is a consistent stabilization term depending on the jumps of on the interior edges of the mesh .\nIn (10c ###reference_.3###) we have considered two different upwind formulas, the classical upwind\nwhose properties were discussed in [acosta-soba_upwind_2022], and\nwhich follows the ideas introduced in [acosta-soba_KS_2022, acosta2023structure], and which will be detailed in the Subsection 3.2.1 ###reference_.SSS1###.\nFinally, we have introduced in (10 ###reference_.x1###) two consistent stabilizations terms:\nwhich, following the ideas of [tierra_guillen_abels_2014], can be interpreted as a residual to the equation (2 ###reference_###); and\nwhich\nis introduced to control the influence of the upwind term in (10c ###reference_.3###). This latter stabilization together with the centered approximation\n\nof the phase-field force in the momentum equation (10 ###reference_.x1###), cancel the effect of the transport of the phase-field function by the mean velocity and allow us to obtain a discrete energy inequality, see Lemma 3.7 ###reference_theorem7### below.\nTo start the algorithm we take where is the continuous initial data, which satisfies . Notice that, one also has .\nObserve that the -mean value constraint on the pressure has been removed from the discrete formulation (10 ###reference_###). 
This constraint will be enforced\nin practice by using an additional penalty term, see Section 4 ###reference_### below."
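[The convex-splitting discretization of the Ginzburg-Landau potential is only named, not written out, in the extracted text. Below is a minimal sketch of the most common choice, with the convex part phi^4/4 treated implicitly and the concave part -phi^2/2 explicitly; the paper's precise definition may differ.]

def double_well(phi):
    # Ginzburg-Landau double-well potential F(phi) = (phi^2 - 1)^2 / 4
    return 0.25 * (phi**2 - 1.0)**2

def f_convex_splitting(phi_new, phi_old):
    # Convex-splitting approximation of F'(phi) = phi^3 - phi: implicit convex
    # part, explicit concave part. It is consistent, since
    # f_convex_splitting(phi, phi) == phi**3 - phi.
    return phi_new**3 - phi_old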
34
+ },
35
+ {
36
+ "section_id": "3.2.1",
37
+ "parent_section_id": "3.2",
38
+ "section_name": "3.2.1 Definition of the upwind bilinear form",
39
+ "text": "In order to define the upwind bilinear form we follow the ideas of [acosta-soba_KS_2022, acosta2023structure].\nFirst, we split the mobility function for into its increasing and decreasing parts, denoted respectively by and , as follows:\nTherefore,\nNotice that .\nFollowing the work in [acosta-soba_upwind_2022], we can define the following upwind form for any and :\nwhere on every .\nNonetheless, if we want to ensure a discrete energy law, as was done in [acosta-soba_KS_2022, acosta2023structure], we need to introduce the following hypothesis:\nThe mesh of is structured in the sense that, for any interior interface , the line between the barycenters of and is orthogonal to .\nUnder this hypothesis, we can consider the following consistent approximation on every , as done in [acosta-soba_KS_2022, acosta2023structure]:\nwhere is the distance between the barycenters of the triangles of the mesh that share .\nTherefore, we can extend the definition of the upwind bilinear form (3.2.1 ###reference_a###) as follows:\nThis upwind approximation allows us to obtain both a discrete maximum principle and an energy-stability property as shown in [acosta2023structure] for a tumor model based on the Cahn-Hilliard equation with degenerate mobility.\nNotice that the upwind bilinear form given in (14 ###reference_###), can be seen as a particular case of given in (3.2.1 ###reference_a###), changing by , but now we have not truncated the transported variable .\nIn fact, it is not necessary to truncate in to preserve the pointwise bounds of due to the local incompressibility of (see [acosta-soba_upwind_2022] for a more detailed explanation)."
40
+ },
41
+ {
42
+ "section_id": "3.2.2",
43
+ "parent_section_id": "3.2",
44
+ "section_name": "3.2.2 Properties of the scheme (10)",
45
+ "text": "The mass of the phase-field variable and its regularization are conserved. In fact, one has\nAs a consequence, since is linear with respect to , the mass of the mixture is also conserved,\nJust need to take in (10c ###reference_.3###) and consider the definitions of the regularization given in (9 ###reference_###), and the density of the mixture given in (3 ###reference_###).\n\u220e\nProvided that in , any solution of (10 ###reference_###)\nand\n satisfy:\n in .\nTo prove that in we may take the following test function\nwhere is an element of such that . We denote the normal vector exterior to . Then, since we can assure, using the local incompressibility constraint (12 ###reference_###), that\nOn the other hand, using that the positive part is an increasing function and that\nwe can obtain (see [acosta-soba_upwind_2022, acosta2023structure])\nConsequently, . Therefore,\nwhich implies, since , that . Hence, \nin .\nSimilarly, taking the following test function\nwhere is an element of such that , we can arrive at in .\nFinally, in is a direct consequence of the definition of the projection given in (9 ###reference_###).\n\u220e\nThe next Corollary is a direct consequence of the previous result.\nProvided that in , the density of the mixture satisfies in .\nThe following Lemma is a technical result that we are going to use when computing the discrete energy law.\nThe following expression holds\nFirst, notice that we can rewrite the term as follows\nThen, by definition and due to ,\nFinally, using (10b ###reference_.2###),\nwhat yields (22 ###reference_###).\n\u220e\nThe following discrete energy law holds:\nwhere the energy functional is defined in (6 ###reference_###).\nFirst, take and in (10 ###reference_.x1###)\u2013(10b ###reference_.2###). Consider that\nand, by definition of given in (15 ###reference_###),\nThen, using (24 ###reference_###) and (3.2.2 ###reference_2###) we can arrive at the following expression\nNow, if we test (10c ###reference_.3###)\u2013(10d ###reference_.4###) with and and we add the resulting expressions and (26 ###reference_###), we obtain, using (22 ###reference_###),\nFinally, the following equalities\nyield (3.8 ###reference_0###).\n\u220e\nUsing the definition of the upwind form and the standard procedure for the convex-splitting technique (see e.g. [eyre_1998_unconditionally, guillen-gonzalez_linear_2013]), one can show the following Lemma.\nThe following two inequalities hold:\nThe following result is a direct consequence of Theorem 3.8 ###reference_theorem8### and Lemma 3.9 ###reference_theorem9###.\nThe scheme (10 ###reference_###) satisfies\nIn particular, scheme (10 ###reference_###) is unconditionally energy stable, i.e., .\nThe scheme (10 ###reference_###) is nonlinear so we will need to approximate its solution by means of an iterative procedure such as the nonsmooth Newton\u2019s method (see [clarke1990optimization]).\nHowever, the function that appears in the stabilization term is not subdifferentiable at and, although it is rare in practice that holds exactly due to round-off errors, one might eventually find convergence issues. In this case, several approaches can be carried out to improve the convergence of the algorithm. For instance, one may use an iterative procedure that does not rely on the Jacobian of the whole system such as a fixed point algorithm. 
Conversely, if we want to use a higher order procedure depending on the Jacobian like the nonsmooth Newton\u2019s method, one may avoid the use of the ) function regularizing the term as follows\nfor small. This modification preserves the mass conservation and the pointwise bounds but introduces a modification in the discrete energy law, see Theorem 3.11 ###reference_theorem11###.\nThe following result can be proved using the same procedure in Theorem 3.8 ###reference_theorem8### and Corollary 3.10 ###reference_theorem10###.\nIf we regularize the stabilization term in the equation (10 ###reference_.x1###), using defined in (30 ###reference_###) for a certain ,\nthe following discrete energy law holds:"
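[The regularized term (30) is not visible in the extracted text. As an illustration only, one standard C^1 smoothing of the absolute value, which removes the kink at 0 that can hinder Newton's method, is sketched below; it is not claimed to be the paper's choice.]

import math

def smooth_abs(x, delta=1e-10):
    # |x| <= smooth_abs(x, delta) <= |x| + delta, and differentiable at x = 0
    return math.sqrt(x * x + delta * delta)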
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "Numerical experiments",
51
+ "text": "We have carried out the following numerical experiments in the spatial domain . Moreover, we have set the following values of the parameters , , and , unless otherwise specified. Also, we have chosen a constant viscosity, . Following the Remark 3.1 ###reference_theorem1###, we have chosen the pair of \u201cinf-sup\u201d stable spaces . Moreover, to comply with Hypothesis 1 ###reference_1###, we have used a triangular mesh resulting from halving a squared mesh using the diagonals.\nTo compute the approximations we have used the finite element library FEniCSx (see [BasixJoss, AlnaesEtal2014, ScroggsEtal2022]) coupled with PyVista for the visualization of the results (see [sullivan2019pyvista]). The source code for our implementation is hosted on GitHub111https://github.com/danielacos/Papers-src ###reference_###. On the one hand, an iterative Newton solver has been used to approximate the nonlinear problem. In this sense, the modified stabilization term with has been used in the scheme (10 ###reference_###) to avoid convergence issues. On the other hand, we have used the default iterative linear solver, GMRES (generalized minimal residual method), and preconditioner, computed using an incomplete LU factorization (ILU), of PETSc (see [petsc-user-ref, DalcinPazKlerCosimo2011]) for solving the resulting linear systems.\nWe must be careful when dealing with an ill-posed nonlinear problem if we want Newton\u2019s method to converge. To overcome this issue in the case of the approximation (10 ###reference_###), we have added a penalty term to the LHS of (10b ###reference_.2###) with very small (in practice, we have chosen ). In this way, we enforce the -mean constraint on the approximation of the pressure and Newton\u2019s method does converge. In fact, a posteriori, we can check that this additional term has not severely affected the approximation obtained in two different manners. On the one hand, taking into account the of the approximation of we observe that the term has been at most of order . On the other hand, the pointwise bounds have been preserved despite the crucial role that the local incompressibility constraint (12 ###reference_###) plays in Theorem 3.5 ###reference_theorem5###.\nCertainly, many other ways of enforcing the -mean pressure constraint in the nonlinear system can be explored. For instance, another interesting possibility could be adding a penalty term , with , to the LHS of (10b ###reference_.2###) as done in [pacheco2023optimal].\nIn all the figures shown in this section, we plot both the phase field variable (in red/blue) and the following scaled vector field (in white)\nAs a reference, the computational time for these tests in a personal computer with Intel Core i7-6700 3.40GHz using 8 threads has been the following: 10 hours to compute the reference solution in Test 4.1 ###reference_###, around 1.5 hours for Test 4.2 ###reference_###, around 24 hours for Test 4.3 ###reference_### and around 33 hours for Test 4.4 ###reference_###."
52
+ },
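The zero-mean-pressure penalty discussed in this section can be illustrated with a small linear-algebra sketch. This is not the authors' FEniCSx code; the block sizes, matrix entries, and the value eps = 1e-6 are our own toy assumptions. It only shows why the penalty is needed (a saddle-point matrix whose divergence block annihilates constant pressures is singular) and that the penalized pressure keeps a near-zero mean.

```python
# Toy illustration (not the paper's implementation) of the pressure penalty:
# adding eps*(p, q) to the pressure-pressure block removes the constant-pressure
# kernel of a Stokes-like saddle-point system while keeping mean(p) near zero.
import numpy as np

rng = np.random.default_rng(0)
nu, npr = 20, 8                                  # toy numbers of velocity / pressure unknowns
A = rng.standard_normal((nu, nu))
A = A @ A.T + nu * np.eye(nu)                    # SPD "viscous" velocity block
B = rng.standard_normal((npr, nu))
B -= B.mean(axis=0)                              # constant pressures now lie in ker(B^T)
Mp = np.eye(npr)                                 # pressure mass matrix for a toy basis

def saddle(eps):
    """Block matrix [[A, B^T], [B, -eps*Mp]]."""
    return np.block([[A, B.T], [B, -eps * Mp]])

rhs = np.concatenate([rng.standard_normal(nu), np.zeros(npr)])
print(nu + npr - np.linalg.matrix_rank(saddle(0.0)))   # 1: singular without the penalty
sol = np.linalg.solve(saddle(1e-6), rhs)               # nonsingular with a small penalty
print(abs(sol[nu:].mean()))                            # pressure mean stays close to zero
```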
53
+ {
54
+ "section_id": "4.1",
55
+ "parent_section_id": "4",
56
+ "section_name": "Accuracy test",
57
+ "text": "In this case,\nwe define the following initial conditions\nwith , which are plotted in Figure 1 ###reference_###.\n###figure_1### We conduct a preliminary convergence test in which we compare a reference solution in a very refined mesh (, degrees of freedom) with the approximation in a less refined mesh. In this way, with fixed, we can remove the error introduced by the time discretization in each of the different schemes. In any case, we would like to emphasize that such a test for these sophisticated schemes involving several different discrete spaces and projection operators is nontrivial and the results obtained only provide an estimation of the possible order of convergence of the proposed approximations.\nThe results of the test at are shown in Tables 1 ###reference_### and 2 ###reference_###. It is worth mentioning that, as in [acosta-soba_upwind_2022] for the convective Cahn-Hilliard model, order 2 in and order 1 in for the approximation of the variable have been approached. On the other hand, order around 2 in has been obtained for the approximations of and , the latter probably affected by the order of convergence in the approximation of . Finally, order around in seems to have been achieved by the approximation of .\nSeveral works such as [diegel2017convergence, chen2022error, chen2022errorCHNS, styles2008finite] have carried out a careful error analysis of finite element approximations of phase-field models coupled with fluid motion such as the CHNS system or related models. However, most of these works have focused on the case of constant or non-degenerate mobility and constant density and their results are based on the energy-stability property of the proposed approximations. It is left for a future work to study whether these techniques can be extended and applied to derive error estimates for our proposed approximation (10 ###reference_###)."
58
+ },
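Experimental convergence orders of the kind reported in Tables 1 and 2 are usually obtained from errors on consecutive meshes via log(e_coarse/e_fine)/log(h_coarse/h_fine). The helper below is our own and the mesh sizes and errors are placeholders, not the paper's data.

```python
# Hypothetical helper for turning errors on a sequence of meshes into
# experimental orders of convergence; the numbers below are placeholders.
import math

def observed_orders(hs, errors):
    """Order between consecutive refinements: log(e0/e1) / log(h0/h1)."""
    return [math.log(e0 / e1) / math.log(h0 / h1)
            for (h0, e0), (h1, e1) in zip(zip(hs, errors), zip(hs[1:], errors[1:]))]

hs = [1 / 16, 1 / 32, 1 / 64]                 # mesh sizes (illustrative)
errors = [2.1e-2, 5.3e-3, 1.3e-3]             # errors against a refined reference solution
print([round(p, 2) for p in observed_orders(hs, errors)])   # values near 2 suggest order 2 in h
```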
59
+ {
60
+ "section_id": "4.2",
61
+ "parent_section_id": "4",
62
+ "section_name": "Mixing bubbles",
63
+ "text": "For this test we keep the same initial conditions as in the previous test but with . Again, this initial condition can be seen in Figure 1 ###reference_###.\n###figure_2### ###figure_3### ###figure_4### In Figure 2 ###reference_### we have plotted the evolution in time of the approximation obtained using both the scheme (10 ###reference_###) with ( degrees of freedom) and . On the other hand, in Figure 3 ###reference_### (left) we can observe how the bounds are preserved as predicted by the previous analytical results. In addition, in Figure 3 ###reference_### (right) one may observe how the energy decreases as predicted by the theory above.\n###table_1### ###figure_5### ###figure_6### ###table_2### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12###"
64
+ },
65
+ {
66
+ "section_id": "4.3",
67
+ "parent_section_id": "4",
68
+ "section_name": "A heavier bubble falling in a lighter medium",
69
+ "text": "Now, we perform a test in which we define the following initial condition: and\na bubble of density in a lighter medium of density , plotted in Figure 4 ###reference_### (). Moreover, we have added a term on the right-hand side of equation (1a ###reference_1###) acting as the gravitational forces pushing the heavier bubble down to the bottom of the domain . In our case, we have chosen and we have treated this term implicitly in (10 ###reference_###).\nIn this case, we have shown in Figure 4 ###reference_### the evolution in time of the solution using (10 ###reference_###) with and . The result is qualitatively similar to the ones shown in previous studies such as [tierra_guillen_abels_2014]. Also, the bounds are preserved as shown in Figure 5 ###reference_### (left). In this case, the energy does not necessarily decrease due to the gravitational forces as one may observe in Figure 5 ###reference_### (right).\n###table_3### ###figure_13### ###figure_14###"
70
+ },
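For readers who want to reproduce a test of this kind, the sketch below shows a typical setup for a falling-bubble experiment: a phase-field bubble given by a tanh profile and a gravity source rho(phi)*g in the momentum equation. The profile, radius, center, densities, and gravity vector are our own illustrative assumptions, not the exact data of Test 4.3.

```python
# Hedged sketch (our own choices, not the exact data of Test 4.3) of a
# falling-bubble initialization and the buoyancy/gravity source rho(phi)*g.
import numpy as np

epsilon, radius, center = 0.02, 0.15, np.array([0.5, 0.75])
rho_in, rho_out, g = 100.0, 1.0, np.array([0.0, -9.8])    # illustrative densities and gravity

def phi0(x):
    """Initial phase field: ~+1 inside the bubble, ~-1 outside."""
    r = np.linalg.norm(x - center, axis=-1)
    return np.tanh((radius - r) / (np.sqrt(2.0) * epsilon))

def rho(phi):
    """Density interpolated linearly by the phase field."""
    return 0.5 * ((1.0 + phi) * rho_in + (1.0 - phi) * rho_out)

x = np.array([[0.5, 0.75], [0.1, 0.1]])                   # inside / outside sample points
print(rho(phi0(x))[:, None] * g)                          # gravity forcing rho(phi)*g
```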
71
+ {
72
+ "section_id": "4.4",
73
+ "parent_section_id": "4",
74
+ "section_name": "A Rayleigh-Taylor type instability",
75
+ "text": "Finally, we carry out a benchmark Rayleigh-Taylor type instability test based on the one shown in [tierra_guillen_abels_2014] for which we define the following initial condition: and\nplotted in Figure 6 ###reference_### (). Again, we add the gravity term with in the RHS of equation (1a ###reference_1###).\n###table_4### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### The evolution in time of the solution using (10 ###reference_###) with and can be seen in Figure 6 ###reference_###. Again, despite the difficulty of this test due to the fast dynamics involved, the results are qualitatively similar to the ones shown in previous works such as [tierra_guillen_abels_2014]. In Figure 7 ###reference_### (left) we plot the evolution of the maximum and minimum of the regularized phase-field function, where we can observe that the bounds are indeed preserved as predicted by the theory.\nIn addition, one may observe in Figure 7 ###reference_### (right) the behavior of the discrete energy.\n###table_5### ###figure_21### ###figure_22###"
76
+ },
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Conclusion",
81
+ "text": "In this work we have developed a robust, structure-preserving approximation, given in (10 ###reference_###), of the CHNS model with variable density (1 ###reference_###). To our best knowledge this is the first approximation of a CHNS model with a Ginzburg-Landau polynomial potential and degenerate mobility that ensures the mass-conservation, pointwise bounds and energy-stability properties at the same time.\nThis approximation combines the ideas of the previous works [acosta-soba_upwind_2022, acosta2023structure] to preserve the pointwise bounds of the phase-field variable as shown in Theorem 3.5 ###reference_theorem5###. In this regard, we have used a finite element approximation for the Navier-Stokes fluid flow with discontinuous pressure that preserves the incompressibility of the velocity locally in each of the elements of the mesh , see (12 ###reference_###). In addition, a carefully developed upwind discontinuous Galerkin approximation for the Cahn-Hilliard part has been chosen.\nMoreover, the ideas in [acosta-soba_KS_2022, acosta2023structure] about approximating the normal derivative of the chemical potential in a structured mesh, (20 ###reference_###), and the bilinear form (21 ###reference_###) have been employed. These ideas have been combined with novel stabilization techniques such as (16 ###reference_###) and (22 ###reference_###), and the stabilization term (15 ###reference_###) that was previously developed in [tierra_guillen_abels_2014]. This approach has led us to the discrete energy-stability property shown in Theorem 3.8 ###reference_theorem8###.\nFinally, the theoretical discussion has been complemented with several numerical experiments where the good properties of the approximation proposed are manifested. In Test 4.1 ###reference_###, a preliminary accuracy test was carried out where second order of convergence seems to be achieved in . Then, a qualitative Test 4.2 ###reference_### was computed, where the discrete energy-stability property has been exhibited. Finally, two benchmark problems where the action of gravitational forces has been taken into account were conducted: a heavier bubble in a lighter medium (Test 4.3 ###reference_###) and a Rayleigh-Taylor type instability (Test 4.4 ###reference_###). Throughout these tests it could be seen how the pointwise bounds are preserved at the discrete level for the phase-field variable.\nDespite the robustness and good properties of this new numerical approximation, we would like to mention that there is still much room for improvement. In particular, the main drawback of this numerical scheme is the computational cost inherent to such a fully coupled approximation.\nIn this sense, we have also explored the idea of developing a decoupled property-preserving approximation of (1 ###reference_###) by means of a rotational pressure projection technique following the previous work in [liu2022pressure]. However, this has been finally left for a future work due to the number of difficulties related to such kind of approximations. On the one hand, applying a rotational projection technique to a model with variable viscosity is not trivial as shown in [deteix2018improving, deteix2019shear, plasman2020projection]. On the other hand, developing a stable decoupled approximation for a system involving variable densities requires carefully adjusting the intermediate steps as in [pyo2007gauge, guermond2000projection, guermond2009splitting]. 
In addition, preserving both the pointwise bounds for the phase-field variable and the energy law of the system at the discrete level requires imposing additional restrictions, such as (12 ###reference_###) and (20 ###reference_###), on the techniques implemented. A preliminary work on a decoupled approximation for this system (1 ###reference_###), in the case of constant viscosity, that preserves the pointwise bounds can be seen in [acosta2024analysis, Section 6.5]."
82
+ }
83
+ ],
84
+ "appendix": [],
85
+ "tables": {
86
+ "1": {
87
+ "table_html": "<figure class=\"ltx_table ltx_figure_panel ltx_align_center\" id=\"S4.T1\">\n<table class=\"ltx_tabular ltx_figure_panel ltx_align_middle\" id=\"S4.SS1.28.28\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.SS1.4.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_r ltx_border_t\" id=\"S4.SS1.4.4.4.5\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.SS1.4.4.4.5.1\" style=\"font-size:90%;\">Variable</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.SS1.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S4.SS1.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S4.SS1.3.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" colspan=\"2\" id=\"S4.SS1.4.4.4.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS1.28.28.29.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.SS1.28.28.29.1.1\"><span class=\"ltx_text\" id=\"S4.SS1.28.28.29.1.1.1\" style=\"font-size:90%;\">Error</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.SS1.28.28.29.1.2\"><span class=\"ltx_text\" id=\"S4.SS1.28.28.29.1.2.1\" style=\"font-size:90%;\">Error</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.SS1.28.28.29.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.SS1.28.28.29.1.3.1\" style=\"font-size:90%;\">Order</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.SS1.28.28.29.1.4\"><span class=\"ltx_text\" id=\"S4.SS1.28.28.29.1.4.1\" style=\"font-size:90%;\">Error</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.SS1.28.28.29.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.SS1.28.28.29.1.5.1\" style=\"font-size:90%;\">Order</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.SS1.28.28.29.1.6\"><span class=\"ltx_text\" id=\"S4.SS1.28.28.29.1.6.1\" style=\"font-size:90%;\">Error</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S4.SS1.28.28.29.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.SS1.28.28.29.1.7.1\" style=\"font-size:90%;\">Order</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS1.12.12.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_r ltx_border_tt\" id=\"S4.SS1.5.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.6.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.7.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.8.8.8.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.9.9.9.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.10.10.10.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.11.11.11.7\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S4.SS1.12.12.12.8\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS1.20.20.20\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_r ltx_border_tt\" id=\"S4.SS1.13.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.14.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.15.15.15.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" 
id=\"S4.SS1.16.16.16.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.17.17.17.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.18.18.18.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.19.19.19.7\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S4.SS1.20.20.20.8\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS1.28.28.28\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_ll ltx_border_r ltx_border_tt\" id=\"S4.SS1.21.21.21.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_tt\" id=\"S4.SS1.22.22.22.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_tt\" id=\"S4.SS1.23.23.23.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_tt\" id=\"S4.SS1.24.24.24.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_tt\" id=\"S4.SS1.25.25.25.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_tt\" id=\"S4.SS1.26.26.26.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_tt\" id=\"S4.SS1.27.27.27.7\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr ltx_border_tt\" id=\"S4.SS1.28.28.28.8\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Errors and convergence orders at in .</figcaption>\n</figure>",
88
+ "capture": "Table 1: Errors and convergence orders at in ."
89
+ },
90
+ "2": {
91
+ "table_html": "<figure class=\"ltx_table ltx_figure_panel ltx_align_center\" id=\"S4.T2\">\n<table class=\"ltx_tabular ltx_figure_panel ltx_guessed_headers ltx_align_middle\" id=\"S4.SS1.48.20\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.SS1.32.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_ll ltx_border_r ltx_border_t\" id=\"S4.SS1.32.4.4.5\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.SS1.32.4.4.5.1\" style=\"font-size:90%;\">Variable</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.SS1.29.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S4.SS1.30.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S4.SS1.31.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_rr ltx_border_t\" colspan=\"2\" id=\"S4.SS1.32.4.4.4\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS1.48.20.21.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.SS1.48.20.21.1.1\"><span class=\"ltx_text\" id=\"S4.SS1.48.20.21.1.1.1\" style=\"font-size:90%;\">Error</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.SS1.48.20.21.1.2\"><span class=\"ltx_text\" id=\"S4.SS1.48.20.21.1.2.1\" style=\"font-size:90%;\">Error</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.SS1.48.20.21.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.SS1.48.20.21.1.3.1\" style=\"font-size:90%;\">Order</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.SS1.48.20.21.1.4\"><span class=\"ltx_text\" id=\"S4.SS1.48.20.21.1.4.1\" style=\"font-size:90%;\">Error</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.SS1.48.20.21.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.SS1.48.20.21.1.5.1\" style=\"font-size:90%;\">Order</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.SS1.48.20.21.1.6\"><span class=\"ltx_text\" id=\"S4.SS1.48.20.21.1.6.1\" style=\"font-size:90%;\">Error</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S4.SS1.48.20.21.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.SS1.48.20.21.1.7.1\" style=\"font-size:90%;\">Order</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS1.40.12.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_ll ltx_border_r ltx_border_tt\" id=\"S4.SS1.33.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.34.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.35.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.36.8.8.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.37.9.9.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.38.10.10.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.SS1.39.11.11.7\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_tt\" id=\"S4.SS1.40.12.12.8\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.SS1.48.20.20\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_ll ltx_border_r ltx_border_tt\" id=\"S4.SS1.41.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_tt\" id=\"S4.SS1.42.14.14.2\"></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_tt\" id=\"S4.SS1.43.15.15.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_tt\" id=\"S4.SS1.44.16.16.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_tt\" id=\"S4.SS1.45.17.17.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_tt\" id=\"S4.SS1.46.18.18.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_tt\" id=\"S4.SS1.47.19.19.7\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr ltx_border_tt\" id=\"S4.SS1.48.20.20.8\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Errors and convergence orders at in .</figcaption>\n</figure>",
92
+ "capture": "Table 2: Errors and convergence orders at in ."
93
+ }
94
+ },
95
+ "image_paths": {
96
+ "1": {
97
+ "figure_path": "2310.01522v3_figure_1.png",
98
+ "caption": "Figure 1: Initial condition of Tests 4.1 and 4.2.",
99
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_circle_Pi1_phi_i-0_cropped.png"
100
+ },
101
+ "2(a)": {
102
+ "figure_path": "2310.01522v3_figure_2(a).png",
103
+ "caption": "Figure 2: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.2.",
104
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_circle_Pi1_phi_i-20_cropped.png"
105
+ },
106
+ "2(b)": {
107
+ "figure_path": "2310.01522v3_figure_2(b).png",
108
+ "caption": "Figure 2: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.2.",
109
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_circle_Pi1_phi_i-50_cropped.png"
110
+ },
111
+ "2(c)": {
112
+ "figure_path": "2310.01522v3_figure_2(c).png",
113
+ "caption": "Figure 2: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.2.",
114
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_circle_Pi1_phi_i-100_cropped.png"
115
+ },
116
+ "3(a)": {
117
+ "figure_path": "2310.01522v3_figure_3(a).png",
118
+ "caption": "Figure 3: Left, maximum and minimum of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5. Right, discrete energy. Test 4.2.",
119
+ "url": "http://arxiv.org/html/2310.01522v3/x1.png"
120
+ },
121
+ "3(b)": {
122
+ "figure_path": "2310.01522v3_figure_3(b).png",
123
+ "caption": "Figure 3: Left, maximum and minimum of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5. Right, discrete energy. Test 4.2.",
124
+ "url": "http://arxiv.org/html/2310.01522v3/x2.png"
125
+ },
126
+ "4(a)": {
127
+ "figure_path": "2310.01522v3_figure_4(a).png",
128
+ "caption": "Figure 4: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.3.",
129
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_bubble_Pi1_phi_i-0_cropped.png"
130
+ },
131
+ "4(b)": {
132
+ "figure_path": "2310.01522v3_figure_4(b).png",
133
+ "caption": "Figure 4: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.3.",
134
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_bubble_Pi1_phi_i-65_cropped.png"
135
+ },
136
+ "4(c)": {
137
+ "figure_path": "2310.01522v3_figure_4(c).png",
138
+ "caption": "Figure 4: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.3.",
139
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_bubble_Pi1_phi_i-120_cropped.png"
140
+ },
141
+ "4(d)": {
142
+ "figure_path": "2310.01522v3_figure_4(d).png",
143
+ "caption": "Figure 4: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.3.",
144
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_bubble_Pi1_phi_i-300_cropped.png"
145
+ },
146
+ "4(e)": {
147
+ "figure_path": "2310.01522v3_figure_4(e).png",
148
+ "caption": "Figure 4: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.3.",
149
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_bubble_Pi1_phi_i-450_cropped.png"
150
+ },
151
+ "4(f)": {
152
+ "figure_path": "2310.01522v3_figure_4(f).png",
153
+ "caption": "Figure 4: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.3.",
154
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_bubble_Pi1_phi_i-2500_cropped.png"
155
+ },
156
+ "5(a)": {
157
+ "figure_path": "2310.01522v3_figure_5(a).png",
158
+ "caption": "Figure 5: Left, maximum and minimum of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5. Right, discrete energy. Test 4.3.",
159
+ "url": "http://arxiv.org/html/2310.01522v3/x3.png"
160
+ },
161
+ "5(b)": {
162
+ "figure_path": "2310.01522v3_figure_5(b).png",
163
+ "caption": "Figure 5: Left, maximum and minimum of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5. Right, discrete energy. Test 4.3.",
164
+ "url": "http://arxiv.org/html/2310.01522v3/x4.png"
165
+ },
166
+ "6(a)": {
167
+ "figure_path": "2310.01522v3_figure_6(a).png",
168
+ "caption": "Figure 6: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.4.",
169
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_rayleigh_Pi1_phi_i-0_cropped.png"
170
+ },
171
+ "6(b)": {
172
+ "figure_path": "2310.01522v3_figure_6(b).png",
173
+ "caption": "Figure 6: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.4.",
174
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_rayleigh_Pi1_phi_i-125_cropped.png"
175
+ },
176
+ "6(c)": {
177
+ "figure_path": "2310.01522v3_figure_6(c).png",
178
+ "caption": "Figure 6: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.4.",
179
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_rayleigh_Pi1_phi_i-200_cropped.png"
180
+ },
181
+ "6(d)": {
182
+ "figure_path": "2310.01522v3_figure_6(d).png",
183
+ "caption": "Figure 6: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.4.",
184
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_rayleigh_Pi1_phi_i-300_cropped.png"
185
+ },
186
+ "6(e)": {
187
+ "figure_path": "2310.01522v3_figure_6(e).png",
188
+ "caption": "Figure 6: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.4.",
189
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_rayleigh_Pi1_phi_i-800_cropped.png"
190
+ },
191
+ "6(f)": {
192
+ "figure_path": "2310.01522v3_figure_6(f).png",
193
+ "caption": "Figure 6: Evolution of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5 over time in Test 4.4.",
194
+ "url": "http://arxiv.org/html/2310.01522v3/extracted/6030190/img/NSCH_DG-UPW_coupled_rayleigh_Pi1_phi_i-3500_cropped.png"
195
+ },
196
+ "7(a)": {
197
+ "figure_path": "2310.01522v3_figure_7(a).png",
198
+ "caption": "Figure 7: Left, maximum and minimum of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5. Right, discrete energy. Test 4.4.",
199
+ "url": "http://arxiv.org/html/2310.01522v3/x5.png"
200
+ },
201
+ "7(b)": {
202
+ "figure_path": "2310.01522v3_figure_7(b).png",
203
+ "caption": "Figure 7: Left, maximum and minimum of \u03a01h\u2062\u03d5subscriptsuperscript\u03a0\u210e1italic-\u03d5\\Pi^{h}_{1}\\phiroman_\u03a0 start_POSTSUPERSCRIPT italic_h end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT italic_\u03d5. Right, discrete energy. Test 4.4.",
204
+ "url": "http://arxiv.org/html/2310.01522v3/x6.png"
205
+ }
206
+ },
207
+ "validation": true,
208
+ "references": [],
209
+ "url": "http://arxiv.org/html/2310.01522v3"
210
+ }
20241127/2310.11083v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2311.10270v5.json ADDED
@@ -0,0 +1,139 @@
1
+ {
2
+ "title": "Multiscale Hodge Scattering Networks for Data Analysis",
3
+ "abstract": "We propose new scattering networks for signals measured on simplicial complexes, which we call Multiscale Hodge Scattering Networks (MHSNs).\nOur construction is based on multiscale basis dictionaries on simplicial complexes, i.e., the -GHWT and -HGLET, which we recently developed for simplices of dimension in a given simplicial complex by generalizing the node-based Generalized Haar-Walsh Transform (GHWT) and Hierarchical Graph Laplacian Eigen Transform (HGLET).\nThe -GHWT and the -HGLET both form redundant sets (i.e., dictionaries) of multiscale basis vectors and the corresponding expansion coefficients of a given signal.\nOur MHSNs use a layered structure analogous to a convolutional neural network (CNN) to cascade the moments of the modulus of the dictionary coefficients.\nThe resulting features are invariant to reordering of the simplices (i.e., node permutation of the underlying graphs).\nImportantly, the use of multiscale basis dictionaries in our MHSNs admits a natural pooling operation that is akin to local pooling in CNNs, and which may be performed either locally or per-scale.\nThese pooling operations are harder to define in both traditional scattering networks based on Morlet wavelets, and geometric scattering networks based on Diffusion Wavelets.\nAs a result, we are able to extract a rich set of descriptive yet robust features that can be used along with very simple machine learning methods (i.e., logistic regression or support vector machines) to achieve high-accuracy classification systems with far fewer number of parameters to train than most modern graph neural networks.\nFinally, we demonstrate the usefulness of our MHSNs in three distinct types of problems: signal classification, domain (i.e., graph/simplex) classification, and molecular dynamics prediction.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Scattering Transforms were introduced by Mallat in [1 ###reference_b1###] as a method for feature extraction for signals and images. These features are translation quasi-invariant, stable to deformation, and preserve high-frequency information from the input, which makes them ideal for a wide variety of data classification problems, e.g., texture image classification [2 ###reference_b2###, 3 ###reference_b3###]. In addition, their computational architecture closely resembles that of convolutional neural networks (CNNs), which allows for fast, GPU-friendly computation. In fact, these networks are often thought of as a type of CNN, with predetermined wavelet filter banks as their convolution filters and a pointwise modulus operation as their activation function. A key advantage of these networks over traditional CNNs is that since the filter banks do not need to be learned from input data, they are much less data-hungry. Additionally, they are more interpretable since each channel in the hidden representation is a deterministic cascade of wavelet transform convolutions with nonlinear activation and averaging operators.\nMore recently, Gao et al. introduced an analogous network architecture for node signals on undirected graphs [4 ###reference_b4###], which they named \u201cGeometric Scattering (Networks).\u201d In this context, invariance to node permutation takes the place of translation invariance. This is achieved in a manner similar to PointNet [5 ###reference_b5###], by aggregating the node features through either a sum or max-value operation into a single value for each channel. This leads to a deformation-stable feature extractor that can be utilized in concert with a very simple learning model (normally logistic regression or a support vector machine) to achieve near state-of-the-art classification results on many datasets, with far fewer training examples than CNN-based approaches often require. As a result, these networks generate descriptive yet robust features that can be used along with very simple machine learning methods (i.e., support vector machines or logistic regression) to achieve high-accuracy classification systems with only small amounts of training data and with far fewer parameters to train.\nIn this article, we advance this line of research to apply to signals defined on arbitrarily high-dimensional simplicial structures, i.e., edges, triangles, pyramids, and their -dimensional analogues. Our methods differ from previous work in two key ways. First, previous scattering transform networks have applied only to point-valued signals, whereas our construction generalizes to high-dimensional structures. Second, we utilize the -Hierarchical Graph Laplace Transforms (-HGLET) [6 ###reference_b6###, 7 ###reference_b7###] and -Generalized Haar-Walsh Transform (-GHWT) [8 ###reference_b8###, 7 ###reference_b7###] as the wavelet banks in the transforms. Previous work has mostly been based on Morlet wavelets for images and Diffusion Wavelets [9 ###reference_b9###] for graph-based signals. However, we find that the bipartition tree induced by the multiscale transforms we proposed in [7 ###reference_b7###] allows us to form sparser approximations, which in turn lead to more efficient feature extraction and therefore more expressive networks. Additionally, the multiscale structure of these bases allows us to easily define local pooling operations, which can boost the performance of scattering networks in many applications."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Comparison with Related Works",
15
+ "text": "There has been a growth in recent interest in studying signals defined on edges, triangles, and higher-dimensional substructures within graph structured data [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###]. Applications in computer vision [15 ###reference_b15###, 16 ###reference_b16###], statistics [17 ###reference_b17###], topological data analysis [14 ###reference_b14###, 18 ###reference_b18###], and network analysis [19 ###reference_b19###] have benefited from the study of high-dimensional simplicial complexes. Convolution-based simplicial neural networks have shown remarkable results in these new domains [20 ###reference_b20###]. We extend this line of\nresearch by defining scattering networks on these higher-dimensional domains.\nScattering networks [2 ###reference_b2###] were initially introduced as a tool to explain the success of CNNs on many computer vision problems. These networks had many of the invariant and equivariant properties that make CNNs desirable, but did not contain any learnable \u2018filters\u2019, and instead employed a cascade of wavelet convolutions and contractive nonlinearities. Later, Gao et al. successfully generalized these networks to apply to graph convolutional networks [4 ###reference_b4###]. Our work further generalizes these approaches.\nThe main ways our MHSNs differ from Geometric Scattering Networks (GSNs) [4 ###reference_b4###] and Deep Haar Scattering Networks (DHSNs) [21 ###reference_b21###] are:\n1) MHSNs accept arbitrary simplicial complexes while GSNs/DHSNs were designed for nodes only; and\n2) GSNs and DHSNs are based on the Diffusion Wavelets of Coifman and Maggioni [9 ###reference_b9###] and the Haar transform, respectively, and hence they are not based on the hierarchical partitioning of a given graph, while MHSNs are built over the richer HGLET/GHWT dictionaries and more amenable for analysis since they are composed of a collection of orthonormal bases (ONBs).\nHodgelets [22 ###reference_b22###] use a kernel defined in the spectral domain, similar to the spectral graph wavelet transform [23 ###reference_b23###] to define another family of wavelet-like frames for signals on simplicial complexes. Topological Slepians [24 ###reference_b24###] also form a localized basis dictionary on a given collection of -simplices, but their construction is based on the maximization of primal domain concentration of vectors subject to the dual domain (the frequency domain) support set. However, both Hodgelets and Topological Slepians are difficult to use for scattering transform type representations since they are not hierarchically arranged.\nRecently, Chew et al. introduced a method for windowed scattering transforms which achieve local-pooling like operations [25 ###reference_b25###]. However, since the underlying topology of the graph/complex is non-Euclidean, it may be difficult to consistently define the local windows across multiple graphs [26 ###reference_b26###, 27 ###reference_b27###]. It may be possible to use the partitioning scheme proposed in [7 ###reference_b7###] for these windows, but defining appropriate wavelet families for such a hybrid approach requires further study."
16
+ },
17
+ {
18
+ "section_id": "2",
19
+ "parent_section_id": null,
20
+ "section_name": "Hodge Laplacians and Multiscale Basis Dictionaries",
21
+ "text": "In this section we review some basic Hodge theory to define the Hodge Laplacian on simplicial complexes and then summarize the construction of multiscale basis functions on these spaces. For a more thorough introduction into Hodge Theory see [10 ###reference_b10###, 12 ###reference_b12###, 16 ###reference_b16###] and for a more detailed explanation of multiscale basis dictionaries see [28 ###reference_b28###, 7 ###reference_b7###]."
22
+ },
23
+ {
24
+ "section_id": "2.1",
25
+ "parent_section_id": "2",
26
+ "section_name": "Simplicial Complexes and Boundary Operators",
27
+ "text": "In this subsection we review concepts from algebraic topology to formally define simplicial complexes and introduce some notions of how two simplices can be \u201cadjacent.\u201d For a more thorough review, see [10 ###reference_b10###, 12 ###reference_b12###]. Given a vertex (or node) set , a -simplex is a -subset of .\nA face of is a -subset of , and so has faces.\nA co-face of is a -simplex, of which is a face.\nA simplicial complex is a collection of simplices closed under subsets, where if , then .\nIn particular, if , so does each face of .\nLet , and for each , let denote the set of -simplices in , and let be the space of real-valued functions on .\nWhen , .\nWe also refer to as a -complex to note that .\nLet a -region of refer to any nonempty subset of .\nLet be a simplicial complex, and , for some .\nWhen share a face, they are weakly adjacent, denoted by .\nWhen , additionally they both share a co-face, their hull, denoted by .\nIf , , and , then are strongly adjacent, denoted by .\nIf , but in , then are -adjacent, denoted . Figure 1 ###reference_### demonstrates these various adjacencies among simplices in a toy -complex.\n###figure_1### Suppose , , and is its face.\nThen, for some .\nDefine the natural parity of with respect to its face as .\nAn oriented simplex further has an orientation which indicates whether its parity with its faces is the same as, or opposite to, its natural parity.\nWhen , we say is in natural orientation.\nFor example, a directed edge for is in natural orientation, while if , .\nAn oriented simplicial complex contains at most one orientation for any given simplex.\nGiven an oriented simplicial complex , for each ,\nthe boundary operator is a linear operator , where for , , the corresponding matrix entries are .\nLikewise, the coboundary operator for each is just , the adjoint to .\nThe entries of express relative orientation between simplex and face, and they are a natural way to construct functions taking local signed averages, according to adjacency in the simplicial complex."
28
+ },
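The boundary matrices described in this subsection can be assembled directly from the simplex lists. The sketch below is our own minimal construction for simplices in natural (sorted-vertex) orientation, using a toy complex of two triangles sharing an edge; it also checks the defining identity that the composition of consecutive boundary operators vanishes.

```python
# Minimal sketch of the boundary matrix B_k: entry (face, simplex) is (-1)^i
# when the face is obtained by deleting the i-th vertex (natural parity).
import numpy as np
from itertools import combinations

def boundary_matrix(k_simplices, km1_simplices):
    """B_k maps signals on k-simplices to signals on (k-1)-simplices."""
    index = {s: row for row, s in enumerate(km1_simplices)}
    B = np.zeros((len(km1_simplices), len(k_simplices)))
    for col, simplex in enumerate(k_simplices):
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]      # delete the i-th vertex
            B[index[face], col] = (-1) ** i           # natural parity
    return B

triangles = [(0, 1, 2), (1, 2, 3)]                    # toy 2-complex
edges = sorted({f for t in triangles for f in combinations(t, 2)})
B2 = boundary_matrix(triangles, edges)
B1 = boundary_matrix(edges, [(v,) for v in range(4)])
print(B1 @ B2)                                        # composition of boundaries is zero
```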
29
+ {
30
+ "section_id": "2.2",
31
+ "parent_section_id": "2",
32
+ "section_name": "Hodge Laplacian",
33
+ "text": "The boundary operators just introduced represent discrete differential operators encoding the structure of -regions in a simplicial complex, and so can be building blocks towards a spectral analysis of functions on those regions.\nFor analyzing functions on -simplices with , we will construct operators based on the Hodge Laplacian, or -Laplacian.\nAs in [15 ###reference_b15###], the combinatorial -Laplacian is defined for -simplices as\nVarious forms of weighting and normalization are possible, with corresponding advantages and difficulties, and different interpretations of the resulting Hodge Laplacian\u2019s Fiedler vector, as explored in [29 ###reference_b29###, Chap. 4].\nIn our numerical experiments, we choose the symmetrically normalized, weighted Hodge Laplacian, defined as in [14 ###reference_b14###], as follows.\nFor each , let refer to a diagonal matrix, whose diagonal entries contain an assignment of positive weights to each -simplex in .\nOne such choice, as in [14 ###reference_b14###], is to set a simplex\u2019s weight as its degree, by taking , , and , where .\nDefine the normalized boundary matrix .\nThen the symmetrically normalized, weighted Hodge Laplacian is defined as\nThrough the rest of this article, when we wish to refer to some variant of the Hodge Laplacian calculated on a -region , without specifying a choice of normalization and/or weighting, we will use .\nWhen , we simplify to ."
34
+ },
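A short sketch of the combinatorial k-Laplacian in the first displayed formula above, for k = 1 on a single filled triangle, is given below. The weighted, symmetrically normalized variant actually used in the experiments additionally rescales by the diagonal weight matrices and is omitted here.

```python
# Sketch of the combinatorial Hodge Laplacian L_k = B_k^T B_k + B_{k+1} B_{k+1}^T
# for k = 1 on the filled triangle (0,1,2) with edges (0,1), (0,2), (1,2).
import numpy as np

B1 = np.array([[-1, -1,  0],      # vertex 0
               [ 1,  0, -1],      # vertex 1
               [ 0,  1,  1]])     # vertex 2   (columns: edges (0,1), (0,2), (1,2))
B2 = np.array([[ 1],
               [-1],
               [ 1]])             # boundary of the triangle (0,1,2) in the edge basis

def hodge_laplacian(Bk, Bkp1=None):
    """Combinatorial Hodge Laplacian acting on k-simplex signals."""
    L = Bk.T @ Bk                           # "down" part: adjacency through shared faces
    if Bkp1 is not None:
        L = L + Bkp1 @ Bkp1.T               # "up" part: adjacency through shared co-faces
    return L

L1 = hodge_laplacian(B1, B2)
print(np.linalg.eigvalsh(L1))               # no zero eigenvalue: the filled triangle has no 1-d hole
```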
35
+ {
36
+ "section_id": "2.3",
37
+ "parent_section_id": "2",
38
+ "section_name": "The -HGLET",
39
+ "text": "The -HGLET is a generalization of the Hierarchical Graph Laplacian Eigen Transform (HGLET) [30 ###reference_b30###] from functions on the nodes of a graph, to functions on the -simplices in a given simplicial complex [7 ###reference_b7###].\nThe HGLET, in turn, can be viewed as a generalization of the\nHierarchical Block Discrete Cosine Transform (HBDCT), which\nis generated by creating a hierarchical bipartition of the signal domain and\ncomputing the DCT of the local signal supported on each subdomain.\nLet be the set of basis vectors in the -HGLET\ndictionary where denotes the level of the partition (with being the\nroot), indicates the partition within the level, and indexes the elements\nwithin each partition in increasing frequency.\nLet refer to the -region consisting of the support of partition at level (or scale) , and let .\nHence and .\nIn order to compute the transform, we first compute the complete set of eigenvectors\n of , and order them by nondecreasing eigenvalues.\nWe then partition into two disjoint -regions and \nby forming the Fiedler vector of .\nWe note that: 1) one can use any other bipartition methods; and 2) bipartitioning with the Fiedler vector in the -region setting requires additional steps vs. the graph setting, because of its rather intricate behaviors; see [7 ###reference_b7###] for the details.\nWe iterate the same procedure for and to generate\nthe eigenvectors and .\nNote that , and that all of the elements in\n are orthogonal to those in since their\nsupports are disjoint. The set form an ONB for vectors in .\nFrom here, we apply this process recursively, generating an ONB for each level\nin the given hierarchical bipartition tree.\nIf the hierarchical bipartition tree terminates at every region containing only a -simplex singleton, then the final level will simply be the standard basis of . Each level of the dictionary contains an ONB whose vectors have the support of roughly half the size of the previous level. There are roughly possible ONBs formed by selecting different covering sets of regions from the hierarchical bipartition tree. We also note that the computational cost of generating the entire dictionary is . See [7 ###reference_b7###] for the actual algorithm to generate the -HGLET on a given and further details."
40
+ },
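The recursion behind the k-HGLET can be condensed into a few lines, shown below as our own simplified reading: on each region, take the eigenvectors of the restricted Laplacian as local basis vectors, split the region by the sign of its Fiedler vector, and recurse. The extra steps needed to bipartition k-regions reliably (noted above and detailed in the cited reference) are glossed over here.

```python
# Condensed, simplified sketch of the k-HGLET dictionary construction.
import numpy as np

def hglet_dictionary(L, region=None, level=0, out=None):
    """Collect (level, region, basis vector) triples from a hierarchical bipartition."""
    n = L.shape[0]
    region = np.arange(n) if region is None else region
    out = [] if out is None else out
    vals, vecs = np.linalg.eigh(L[np.ix_(region, region)])   # sorted by eigenvalue
    for v in vecs.T:
        full = np.zeros(n)
        full[region] = v
        out.append((level, tuple(region), full))             # one ONB per level and region
    if len(region) > 1:
        fiedler = vecs[:, 1]                                  # second eigenvector drives the split
        left, right = region[fiedler >= 0], region[fiedler < 0]
        if len(left) and len(right):                          # recurse only on a genuine split
            hglet_dictionary(L, left, level + 1, out)
            hglet_dictionary(L, right, level + 1, out)
    return out

# Example: the 0-Laplacian (graph Laplacian) of a 4-node path graph.
A = np.diag([1.0, 1.0, 1.0], 1); A = A + A.T
L0 = np.diag(A.sum(axis=1)) - A
print(len(hglet_dictionary(L0)))            # dictionary vectors over all levels (here 12)
```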
41
+ {
42
+ "section_id": "2.4",
43
+ "parent_section_id": "2",
44
+ "section_name": "The -GHWT",
45
+ "text": "This basis dictionary is based on the Generalized Haar-Walsh Transform\n(GHWT) [8 ###reference_b8###], which can itself be viewed as a\ngeneralization of the Haar-Walsh wavelet packets [31 ###reference_b31###, Sec. 8.1].\nThis is formed by first generating a hierarchical bipartition tree of as\nfor the -HGLET.\nWe then work in a bottom-up manner, beginning with the finest level \nwhere each region only contains a single element that is the indicator vector\nof that region. We call them scaling vectors and label them\nas .\nFor the next level ,\nwe first assign a constant scaling vector for the support on each region.\nThen, for each region that contains two children in the partition tree, we form a\nHaar vector by subtracting the scaling vector of the child\nelement with a higher index from that of the child element with a lower index.\nThis procedure will form an ONB \n(where is the number of -regions at level and \nor depending on the region ) whose vectors have support of at most 2.\nFor the level , we begin by computing the scaling and Haar vectors as\nbefore. Next, for any region that contains three or more elements, we also\ncompute Walsh vectors by adding and subtracting the Haar vectors\nof the children regions. From here, we form the rest of the dictionary\nrecursively. A full description of this algorithm is given in [30 ###reference_b30###] for the case and in [7 ###reference_b7###] for the general case of .\nNote that like the -HGLET, each level of the dictionary forms an ONB, and at each level, basis vectors have the support of roughly half the size of the parent level. These basis vectors also have the same support as the corresponding -HGLET basis vectors (that is, for all ). However, the computational cost of computing the -GHWT is only ."
46
+ },
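As a toy illustration of the building blocks just described (not the full bottom-up k-GHWT with its Walsh vectors), the sketch below forms the scaling and Haar vectors attached to one region of the bipartition tree and its two children, and checks that they are orthonormal. The region sizes are arbitrary examples.

```python
# Scaling vector = normalized indicator of the region; Haar vector = normalized
# signed difference of the two children's indicators (orthogonal to the scaling vector).
import numpy as np

def scaling_and_haar(n, child_a, child_b):
    """(scaling, Haar) pair for a region split into child_a and child_b."""
    parent = child_a + child_b
    scaling = np.zeros(n)
    scaling[parent] = 1.0 / np.sqrt(len(parent))
    haar = np.zeros(n)
    haar[child_a] = 1.0 / len(child_a)
    haar[child_b] = -1.0 / len(child_b)
    haar /= np.linalg.norm(haar)
    return scaling, haar

s, h = scaling_and_haar(6, [0, 1, 2], [3, 4])              # a 5-element region among 6 k-simplices
print(round(float(s @ h), 12), round(float(h @ h), 12))    # orthogonal and unit norm
```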
47
+ {
48
+ "section_id": "3",
49
+ "parent_section_id": null,
50
+ "section_name": "Multiscale Hodge Scattering Transform",
51
+ "text": "Let the -HGLET or -GHWT dictionary vectors be arranged as\n where each is an ONB\nat scale (or level) with being the finest scale basis, composed of\ndelta functions. Note that this definition of the scale parameter\n is the opposite of that used in the previous sections.\nIn general, we have different levels given by the\nhierarchical bipartition scheme, but in practice, the features extracted by\nlarge values are not very descriptive [4 ###reference_b4###]. Hence,\nwe typically use the first levels.\nLet and let denote evaluated at simplex .\nIn addition, let us write for .\nWe propose to compute the th moment up to some maximum of the 0th-order and 1st-order scattering coefficients:\nand the 2nd-order scattering coefficients:\nAnd higher-order moments can be computed in a similar manner:\nwhere .\nHowever, due to the combinatorial blow-up in the number of features at each\norder, it is rare to use more than 2nd or 3rd-order features. Note that the th-order features are computed by applying a multiplication with the appropriate (sparse) weight and then applying a pointwise nonlinearity to the -order features. Further, as we later find in our numerical experiments, high-order moments are not very useful in practice due to their instability [2 ###reference_b2###, 4 ###reference_b4###, 25 ###reference_b25###].\nWe denote the transform without the sum or normalization factor as . We can write the higher-order features as:\nBy defining the operator ,\nwhich maps to ,\nwe can also rewrite the higher-order features in Eq. (5 ###reference_###)\nin a more comprehensible manner as:\nThis behavior mimics the architecture of convolutional neural networks (with fixed weights) and has been studied extensively [1 ###reference_b1###]. However, unlike traditional feed-forward networks, rather than only considering the features extracted at the end of the cascade of linear and nonlinear layers, we concatenate the representations extracted at each layer for downstream analysis. In general, we refer to the process of extracting these features as the Multiscale Hodge Scattering Transform (MHST). Later we will analyze these features with relatively simple machine learning models such as support vector machines (SVMs) and logistic regression models (LRMs). When working with such models with learnable parameters in concert with these deterministic features, we refer to it as the Multiscale Hodge Scattering Network (MHSN)."
52
+ },
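A condensed code sketch of the globally pooled features is given below. It is our own reading of the cascade (moments of the modulus of dictionary coefficients, cascading only toward coarser levels); the exact normalization and the range of second-order pairs may differ from the paper's definitions, and the toy orthonormal bases are random placeholders for the k-HGLET/k-GHWT levels.

```python
# Sketch of globally pooled 0th-, 1st- and 2nd-order scattering moments.
import numpy as np

def scattering_moments(bases, f, Q=4):
    """Pooled moments for a list of ONB matrices (columns = dictionary vectors)."""
    def moments(u):
        return [np.mean(np.abs(u) ** q) ** (1.0 / q) for q in range(1, Q + 1)]
    feats = moments(f)                                   # 0th order
    first = [np.abs(Phi.T @ f) for Phi in bases]         # |Phi_j^T f|
    for u in first:                                      # 1st order
        feats += moments(u)
    for j, u in enumerate(first):                        # 2nd order, coarser levels only
        for Phi in bases[j + 1:]:
            feats += moments(np.abs(Phi.T @ u))
    return np.array(feats)

rng = np.random.default_rng(1)
bases = [np.linalg.qr(rng.standard_normal((8, 8)))[0] for _ in range(3)]   # toy ONBs, J = 3
f = rng.standard_normal(8)
print(scattering_moments(bases, f).shape)                # (1 + J + J(J-1)/2) * Q features
```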
53
+ {
54
+ "section_id": "3.1",
55
+ "parent_section_id": "3",
56
+ "section_name": "Local Pooling",
57
+ "text": "In general, we can gather all of the moments and of orders to have a total of features for a given signal. The summations from to in (3 ###reference_###)\u2013(5 ###reference_###) can be viewed as global pooling operations. In situations where permutation invariance is not required (i.e., all signals are defined on a fixed complex with known node ordering), however, we can omit these sums, i.e., no pooling is done. As a result, we are left with features for each signal.\nWe can also generate intermediate features between these two extremes:\nretain sums over each region at level instead of not summing at all or summing all the regions of level in (3 ###reference_###)\u2013(5 ###reference_###). This can be viewed as local pooling operations and results in a tuple of features rather than a single value as in the original scattering transform. This is similar to the windowed scattering transforms recently proposed by [25 ###reference_b25###], but here we leverage the multiscale decomposition provided by the hierarchical partition tree to avoid introducing a new user parameter to determine the window size. In the case of these local pools, we replace the normalization factor in the averaging operation with the number of elements in the local pool rather than the number in the entire simplex as defined in Eq. (8 ###reference_###). This gives us a total of features, where is the number of local sums taken in the th level, i.e., the number of regions at scale .\nWe denote these transforms as where denotes the level at which the sum has taken place. So indicated the standard max-pooling-like scheme as defined by [2 ###reference_b2###, 4 ###reference_b4###].\nThen denotes the transform without any sums while denotes the transform with local pools determined by the third level (i.e., ) of the partition tree. In general we have:\nNote that the subscript indicates where the final averaging has taken place whereas indicates the index of the basis elements used to compute the feature."
58
+ },
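The local pooling variant amounts to replacing the single global sum by one sum per region at a chosen level of the bipartition tree. The short sketch below is ours; the region labels are illustrative stand-ins for the partition at level j'.

```python
# One pooled feature per region at the chosen level instead of one global feature.
import numpy as np

def local_pool(u, region_labels, q=2):
    """q-th moment of |Phi_j^T f| pooled separately over each region."""
    labels = np.asarray(region_labels)
    return np.array([np.mean(np.abs(u[labels == r]) ** q) ** (1.0 / q)
                     for r in np.unique(labels)])

u = np.abs(np.random.default_rng(2).standard_normal(8))       # e.g. |Phi_j^T f|
print(local_pool(u, region_labels=[0, 0, 0, 1, 1, 2, 2, 2]))  # three features instead of one
```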
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Theoretical Analysis of Multiscale Hodge Scattering Transforms",
63
+ "text": "In this section we establish approximation properties of the multiscale basis dictionaries, then use these results to detail the continuity, invariance and equivalence properties of the MHSNs which make them desirable for signal and domain classification problems. For notational convenience, we let and only consider the first-order transform in our formal proofs. However, since can be thought of as applying to the transform , all of the proofs can be trivially generalized to the case.\nFirst, for singleton -elements and of , and signal , we define a distance function and then the associated H\u00f6lder semi-norm as the number of elements in the smallest partition in the tree that contains both elements; formally, we have:\nwhere is a constant in . With these definitions, the dictionary coefficient decay and approximation results of [32 ###reference_b32###, 6 ###reference_b6###] for the GHWT and HGLET can be applied to the -GHWT and -HGLET bases as detailed further in [7 ###reference_b7###].\nFor a simplicial complex equipped with a hierarchical bipartition tree, suppose that a signal is H\u00f6lder continuous with exponent and constant . Then the coefficients with for the -HGLET () and -GHWT () satisfy:\nSee Theorem 3.1 of [6 ###reference_b6###].\n\u220e\nFor a fixed ONB and a parameter , then\nwhere is the best nonlinear -term approximation in the basis , and is defined as\nSee Theorem 3.2 of [6 ###reference_b6###] and Theorem 6.3 of [32 ###reference_b32###].\n\u220e\nNext we establish bounds on the coefficients generated by the multiscale basis. Let indicate the matrix formed by stacking a multiscale basis into a matrix where the th block is the th-level ONB. Next we define a weighted inner product and the associated norm for signals in as:\nIn practice, we can often use , but for some applications more exotic norms, such as those that consider the volume of each face, may be useful.\nLet , and indicate the -norm with respect to the metric . Then we have:\nThis remark is clearly true since is simply a collection of orthogonal matrices, and orthogonal transforms preserve the -norm. Although trivial, this fact will be vital for later proofs. Next we show that the MHST is a non-expansive operation. This allows for powerful nonlinear analysis properties [33 ###reference_b33###] which we detail later.\nLet be the MHST with global pooling formed by the multiscale basis dictionary as defined above in (3 ###reference_###)\u2013(5 ###reference_###), acting on the metric space . Then we have:\nMoreover, let indicate the transforms with local pooling as described in Section 3.1 ###reference_###; then for all we have:\nTo show that the MHST is non-expansive, we will show that each layer is non-expansive. Since the entire transform is defined by a cascade of these layers (with modulus operations taken between the layers), it will also be non-expansive. Then:\nSince this inequality holds for each and , it is clear that the inequality also holds when taking the -norm over the whole collection of features. Then, since each order of the transform features can be formed by applying to the previous order we have:\n. The proof for the local summation follows exactly as above.\n\u220e\nNext we show that our networks are invariant under group operations which preserve the inner product , such as permutations of the -simplices. That is, relabeling indices of the elements of does not affect the output of the transform. For example, given a signal defined on the triangular faces of a simplex, the indexing of the triangles does not affect the globally pooled transform, and permuting this indexing results in an equivalent permutation of the non-pooled signal. This theorem is analogous to Theorem 3 in [25 ###reference_b25###] and Proposition 4.1 in [34 ###reference_b34###], but generalizes to any , rather than strictly to the nodes (=0 case).\nSuppose is a group of transforms on the elements of (e.g., permutations of the -simplices). Furthermore, given any , let denote the operator induced by and the analogous transform on , which is the permuted version of . Then both of the following hold:\nfor all , and .\nAs with the previous proof, we will show that this holds for an arbitrary layer, and since the entire transform is formed from a cascade of these layers, it will also have this property.\nFirst, denote where is an appropriate version of the -Laplacian for .\nIt is immediately clear that if is an eigenvector of with then is an eigenvector of , i.e., since\nThen Since preserves the inner product on , it is clear that is an ONB for whose atoms are the bases formed from eigenvectors of , where\nThen for arbitrary input we have:\nThe proof for the local summation follows exactly as above.\n\u220e"
64
+ },
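Both properties established in this section are easy to verify numerically on small examples. The check below is our own sanity test, using one random orthonormal dictionary level and globally pooled first-order moments as a stand-in for the full MHST; it confirms non-expansiveness and invariance when the simplices are relabeled consistently in the signal and the basis.

```python
# Quick numerical sanity check of non-expansiveness and permutation invariance.
import numpy as np

rng = np.random.default_rng(3)
n = 10
Phi = np.linalg.qr(rng.standard_normal((n, n)))[0]           # one toy ONB level

def pooled_features(Phi, f, Q=3):
    u = np.abs(Phi.T @ f)
    return np.array([np.mean(u ** q) ** (1.0 / q) for q in range(1, Q + 1)])

f, g = rng.standard_normal(n), rng.standard_normal(n)
lhs = np.linalg.norm(pooled_features(Phi, f) - pooled_features(Phi, g))
print(lhs <= np.linalg.norm(f - g))                          # non-expansiveness

perm = rng.permutation(n)                                    # relabeling of the k-simplices
print(np.allclose(pooled_features(Phi, f),
                  pooled_features(Phi[perm], f[perm])))      # permutation invariance
```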
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Signal Classification",
69
+ "text": "We first demonstrate the effectiveness of our MHSNs with the article category\nclassification problem using the Science News database [35 ###reference_b35###, 36 ###reference_b36###]. We do not claim state-of-the-art results for this problem, but rather use it to illustrate the advantages of analyzing signals via higher-dimensional simplices. We also show how locally-pooled networks can shatter problems in which traditional globally-pooled scattering networks fail to differentiate between classes.\nThe Science News dataset contains 1042 scientific\nnews articles classified into eight fields: Anthropology, Astronomy;\nBehavioral Sciences; Earth Sciences; Life Sciences; Math/CS; Medicine; Physics.\nEach article is tagged with keywords from a pool of 1133 words selected by the database curators. We determine a simplicial complex from these keywords by computing their word2vec [37 ###reference_b37###] embeddings based on\nGoogle\u2019s publicly available pre-trained model [38 ###reference_b38###].\nWe generate a symmetric -nearest neighbor graph of the embedded words\nand then generate -simplices of the graph.\nTherefore, a -simplex in this keyword graph corresponds to -face, which represents a combination of words.\nNext, we define representations of each article as a signal in each \nas follows. First, for (i.e., a node-valued signal), we define the\nsignal to be one on the nodes representing their keywords and zero\nelsewhere. For we define the signal to be the simplex-wise\naverage of the signal. That is,\nwhere represents the set of nodes forming the th simplex . Note that these signals are highly localized since the keywords are connected through a symmetrized NN graph, and the higher-order signals are built from the adjacency of the resulting complex. To showcase the robustness of our approach, we report results using both and nearest neighbor graphs.\nTables 1 ###reference_### and 2 ###reference_### compare the performance of our proposed methods with the other simpler methods, i.e., the raw expansion coefficients of the signals relative to the standard ONBs (Delta; Fourier) or the dictionaries (Diffusion; HGLET; GHWT).\nThe parameters for the feature computations were set as .\nFor each , we performed the five-fold cross-validation, i.e., we randomly split these 1042 signals into 80% training and 20%\ntest sets and repeat the experiment 5 times with different train/test splits.\nIn every case we use the -regularized LRM provided by scikit-learn [39 ###reference_b39###] without any additional hyperparameter tuning to compute the classification.\nSeveral observations are in order. First, the traditional, globally-pooled scattering networks mostly fail on this task regardless of the wavelet dictionary employed. Since the number of nonzero entries in each signal is similar and therefore the -norms are also similar, global-pooling schemes fail to capture the keyword information (i.e., indices of nonzero entries) in a way that differentiates between the classes and consequently do not produce statistically significant results. The non-pooled features often provide the highest performance, which is not surprising since there are many more features and learnable parameters than the networks with pooling. However, the locally-pooled features almost always perform on par with the non-pooled features. For both the 5 and 10 nearest neighbor graphs, the best overall results are achieved by the , which has the largest number of elements. 
Similarly, the 10-nearest neighbor graph performs better than the 5-nearest neighbor graph at the cost of larger .\nWe also observe that the networks based on -HGLET and -GHWT generally outperform those based on Diffusion Wavelets. This is likely due to the highly localized and piecewise constant nature of the input signals, which are well-approximated by these dictionaries [7 ###reference_b7###]. In the next section, where the signals are not localized, we do not observe this difference."
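To make the signal construction and classification pipeline of this section concrete, here is a minimal sketch (Python) that builds the simplex-wise averaged article signals and runs a 5-fold cross-validated scikit-learn logistic regression. The keyword graph, simplices, articles, and labels are small synthetic stand-ins (not the Science News data), the raw signals are used directly in place of the scattering features, and the penalty is assumed to be the default L2 regularization.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_nodes = 30                                   # toy "keyword" graph size (assumption)
# toy 2-simplices given as node-index triples; in the paper these come from the
# symmetrized k-NN word2vec graph of the 1133 keywords
simplices = [tuple(rng.choice(n_nodes, 3, replace=False)) for _ in range(60)]

def article_signal(keywords):
    """Indicator of the article's keywords on the nodes, averaged over each simplex."""
    f0 = np.zeros(n_nodes)
    f0[list(keywords)] = 1.0
    return np.array([f0[list(s)].mean() for s in simplices])

# toy corpus: 100 "articles", each tagged with a few keywords, plus random labels
articles = [rng.choice(n_nodes, 5, replace=False) for _ in range(100)]
labels = rng.integers(0, 2, size=100)

X = np.stack([article_signal(kw) for kw in articles])  # MHST features would replace X
clf = LogisticRegression(penalty="l2", max_iter=1000)  # regularized LRM from scikit-learn
print(cross_val_score(clf, X, labels, cv=5).mean())    # 5-fold cross-validation accuracy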
70
+ },
71
+ {
72
+ "section_id": "6",
73
+ "parent_section_id": null,
74
+ "section_name": "Domain Classification",
75
+ "text": "Another vital application of geometric scattering networks and graph neural networks (GNNs) is graph (and simplex) classification. Broadly speaking, this problem consists of predicting a label of a social or chemical graph based on a training set of similar graphs with different configurations (i.e., different numbers of nodes and edges). For example, in the COLLAB dataset [52 ###reference_b52###], each graph represents a network of coauthors of a scientific paper.\nSince the size of the graphs varies greatly within these datasets, we employ only the global-pooling version of our MHSN, akin to the previous efforts reported in [4 ###reference_b4###, 25 ###reference_b25###], which were based on geometric scattering methods.\nWe compute permutation-invariant input features based only on\ntopological information obtainable from the boundary matrices. Since many of the\ngraphs are too small to contain high-degree simplices, we only consider node and\nedge-based features and transforms. Following the methodology developed in\n[4 ###reference_b4###], we set .\nFor the node signals, we first compute\nthe eccentricity and clustering coefficient [HARRIS-ETAL, Sec. 1.2] of\neach node.\nFor each node signal, the number of parameters (MHST coefficients) are\n64 via the formula , hence 128 parameters\nafter concatenating them.\nFor the edge signals, we use the average of the eccentricities of the head and tail nodes of each edge and the number of non-zero off-diagonal terms in the combinatorial Hodge-Laplacian (each such term corresponds to a -adjacent edge [29 ###reference_b29###, Sec. 4.1]).\nFor each domain classification problem we train three models:\n1) using 128 node features; 2) using 128 edge features;\nand 3) using 256 combined features.\nWe then employ a simple SVM with Gaussian radial basis functions to classify\nthe features. Moreover, we tune the hyperparameters controlling the strength\nof the -regularization and the kernel coefficients via the\ncross-validation scheme presented in [4 ###reference_b4###] using the\nsame search space and validation indexes.\nWe compare these results with those obtained by the geometric scattering network (with Diffusion Wavelets) using SVM (GS-SVM) as well as several popular GNN models including the graph convolution network (GCN) [40 ###reference_b40###], universal graph transform (UGT) [41 ###reference_b41###], dynamic graph CNN (DGCNN) [42 ###reference_b42###], graph attention network (GAT) [43 ###reference_b43###], and graph feature network (GFN) [44 ###reference_b44###]. For each competing method, we reproduce the results in the significant figures reported in their original publications; we report to 2 decimal places for our method. More information on the benchmark datasets can be found in A ###reference_###. We remark that, as of this article\u2019s writing, this collection of networks achieves state-of-the-art results on these datasets according to the Papers with Code Leaderboards [45 ###reference_b45###]. Further details on these datasets and their associated classification problems are presented in A ###reference_### and the references therein.\nAlthough our MHSNs do not achieve state-of-the-art results on these datasets, they are very competitive with only a small fraction of the learnable parameters. Moreover, the number of learnable parameters in our models is not tied to the graph size and depends only on the order of the scattering and the number of moments computed. 
For example, Table 4 ###reference_### compares our methods with the UGT and the GFN, which are the state-of-the-art methods for various graph classification problems. These methods each require more than half a million parameters in some cases (867K for the UGT) to achieve results similar to ours, which need only 256 learnable parameters. As a result, our MHSNs can be implemented and trained on a consumer-level laptop, whereas many of these competing GNNs require specialized hardware."
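A minimal sketch of the globally pooled pipeline described above, under simplifying assumptions: the node inputs are the eccentricity and clustering coefficient (computed with networkx), the graph descriptor is just their globally pooled q-th moments rather than the full 128 MHST coefficients, and the dataset is a synthetic two-class collection of random graphs instead of the benchmarks in Table 3.

import numpy as np
import networkx as nx
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def node_inputs(G):
    # purely topological per-node signals, as in the text
    ecc = np.array(list(nx.eccentricity(G).values()), dtype=float)
    clu = np.array(list(nx.clustering(G).values()), dtype=float)
    return ecc, clu

def pooled_moments(x, Q=4):
    # global pooling: q-th moments of |x| for q = 1, ..., Q
    return np.array([np.sum(np.abs(x) ** q) for q in range(1, Q + 1)])

def graph_descriptor(G):
    # concatenated pooled moments of both node inputs (stand-in for the MHST features)
    return np.concatenate([pooled_moments(x) for x in node_inputs(G)])

rng = np.random.default_rng(0)
graphs, labels = [], []
for _ in range(60):                               # toy two-class graph dataset
    p, y = (0.15, 0) if rng.random() < 0.5 else (0.35, 1)
    G = nx.gnp_random_graph(20, p, seed=int(rng.integers(1_000_000)))
    if not nx.is_connected(G):                    # eccentricity requires a connected graph
        G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    graphs.append(G); labels.append(y)

X = np.stack([graph_descriptor(G) for G in graphs])
clf = SVC(kernel="rbf", gamma="scale")            # Gaussian-RBF SVM as in the text
print(cross_val_score(clf, X, np.array(labels), cv=5).mean())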
76
+ },
77
+ {
78
+ "section_id": "7",
79
+ "parent_section_id": null,
80
+ "section_name": "Molecular Dynamics",
81
+ "text": "Our MHSNs can also be used for regression problems where the goal is to predict a continuous property of a simplicial complex (or simply a graph) based on a set of observations of the complex under various conditions. Therefore, they are quite suitable for learning molecular dynamics, particularly the potential energy surface of a molecule, given a few registrations of the molecule and its energies. The Revised Molecular Dynamics 17 (rMD17 dataset) [46 ###reference_b46###] contains 100,000 structures and associated energies of various molecules. However, these structures are taken from a molecular dynamics simulation, i.e., time series data, which is not independent and identically distributed. To overcome this, instead of using the entire dataset, we use five sets of molecule snapshots and the associated potential energies. Each of these sets consists of 1,000 snapshot/energy pairs and is grouped into 800 training and 200 test samples selected by the authors of the dataset [46 ###reference_b46###].\nWe extract a rich set of features for each structure (i.e., a pose or conformation of a molecule) using our MHSNs (without pooling) and then employ a support vector regression (SVR) method with Gaussian radial basis functions to approximate and predict the energy. More specifically, for each molecule, we first compute the average position of each atom across the training set. Then, using these positions, we create a NN-graph (with ) as a template simplicial complex.\nNote that by using this simplicial complex, rather than the molecular-bond graph, we can better capture the geometric information in the pose of the molecule as detailed in [47 ###reference_b47###, 48 ###reference_b48###]. Unlike the domain classification problems in Section 6 ###reference_###,\nthe geometry of the simplicial complex is fixed, so rather than use its geometrically-invariant descriptors, we need to begin\nwith signals that encode the position information of molecules.\nWe then generate the node and edge signals of both training and test sets\nas follows.\nFirst we compute the Euclidean distance matrix of (i.e the Gram matrix of the point-cloud, measured in the Euclidean distance) the node coordinates of each\nsnapshot and assign the corresponding column vector of the distance matrix as\nits node features. This generates a number-of-atoms-dimensional node signal for\neach node, for each snapshot.\nFor an edge signal, we extract edge lengths from the above distance matrix and\ncreate a diagonal matrix of edge lengths. Then, we assign the corresponding\ncolumn vector of this diagonal matrix as its edge features.\nAs with the node-based signal, this gives us number-of-edges-dimensional signal\nto input into our MHSN. For this experiment we use .\nWe do not use simplices of dimension and not set and\n because the molecules are too small.\nTable 5 ###reference_### shows our results for aspirin (21 atoms) and paracetamol (20 atoms) molecules. We compare our MHSNs with several state-of-the-art GNN approaches designed specifically for processing molecular dynamics, including SchNet [47 ###reference_b47###], PaiNN [49 ###reference_b49###], and two variants of Special Orthogonal Networks (SO3Nets) [50 ###reference_b50###, 48 ###reference_b48###].\nWe report both the mean absolute error (MAE) and root mean square error (RMSE) of the energy prediction, which are the standard metrics in the computational chemistry literature. 
Our MHSNs perform competitively with these approaches while employing roughly 1% as many learnable parameters as the competing methods.\nAdditionally, we again observe that our edge-based analysis outperforms the node-based analysis. This demonstrates that higher-dimensional simplex analysis can be more powerful than node-only approaches, even in cases where the underlying graph may not have many higher-dimensional structures."
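To make the feature construction above concrete, here is a minimal sketch under stated assumptions: the atomic coordinates are synthetic jittered copies of a random reference conformation (not rMD17), the flattened inter-atomic Euclidean distance matrix is used directly in place of the MHST node features, and the target energy is a toy smooth function of the distances. The SVR with a Gaussian RBF kernel mirrors the regressor used in the text.

import numpy as np
from scipy.spatial.distance import cdist
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_atoms, n_snapshots = 21, 200                    # aspirin-sized toy problem

ref = rng.standard_normal((n_atoms, 3))           # reference conformation
snapshots = ref + 0.1 * rng.standard_normal((n_snapshots, n_atoms, 3))

def node_features(pos):
    D = cdist(pos, pos)                           # Euclidean distance matrix of one snapshot
    return D.flatten()                            # its columns are the per-node signals

X = np.stack([node_features(p) for p in snapshots])
energy = np.array([np.sum(cdist(p, p) ** 2) for p in snapshots])   # toy potential energy

X_tr, X_te, y_tr, y_te = train_test_split(X, energy, train_size=0.8, random_state=0)
reg = SVR(kernel="rbf", C=10.0, gamma="scale")    # Gaussian-RBF SVR
reg.fit(X_tr, y_tr)
print(mean_absolute_error(y_te, reg.predict(X_te)))   # MAE, as reported in Table 5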
82
+ },
83
+ {
84
+ "section_id": "8",
85
+ "parent_section_id": null,
86
+ "section_name": "Conclusion",
87
+ "text": "In this article, we proposed the Multiscale Hodge Scattering Transforms/Networks (MHSTs/MHSNs) for robust feature extraction from signals on simplicial complexes that can be used in classification and regression problems, fully utilizing our multiscale basis dictionaries on such simplicial complexes, i.e., -HGLET and -GHWT dictionaries. Our MHSTs/MHSNs also have pooling options for the scattering transform outputs: no-pooling; local-pooling; and global-pooling. Such options allow our tools to apply for various classification and regression problems on simplicial complexes ranging from classification of signals recorded on simplicial complexes to classification of type of simplicial complexes (i.e., domain classification problems) to regression of potential energies in molecular dynamics. We demonstrated that MHSNs provide comparable results with those by the state-of-the-art GNNs with up to a two-order of magnitude reduction in number of learnable parameters. We strongly believe that our success here comes from the structure and organization of our multiscale basis dictionaries that are conveniently arranged in terms of scales and locations, which are suitable and quite amenable for generating scattering transform coefficients.\nWe plan to investigate how we can interpret the MHST coefficients that are identified as important by classification methods such as the LRMs. Because of the nonlinearities used in the MHSTs, converting the MHST coefficients to the features in the primal/original domain is difficult. Along this direction, we plan to examine the optimization method proposed by Weber [51 ###reference_b51###, Chap. 4], which synthesizes an input signal that generates the significant scattering transform coefficients at the specified coefficient indices. In a related line of research, we will explore how the MHST coefficients can be used to identify important relationships within graphs that can be used to narrow the training space of various attention mechanisms/graph transformers for large-scale problems."
88
+ }
89
+ ],
90
+ "appendix": [
91
+ {
92
+ "section_id": "Appendix 1",
93
+ "parent_section_id": null,
94
+ "section_name": "Appendix A Description of Domain Classification Datasets",
95
+ "text": ""
96
+ }
97
+ ],
98
+ "tables": {
99
+ "1": {
100
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T1.6\" style=\"width:433.6pt;height:91.3pt;vertical-align:-0.7pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-84.7pt,17.7pt) scale(0.719136133320417,0.719136133320417) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.6.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.2\">Knn = 5</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.3\">Delta</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.4\">Fourier</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S5.T1.1.1.1.5\">Diffusion</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"4\" id=\"S5.T1.1.1.1.6\">HGLET</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"4\" id=\"S5.T1.1.1.1.7\">GHWT</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6.7.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.7.1.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.7.1.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.7.1.3\">Basis</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.7.1.4\">Basis</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.7.1.5\">Dict.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.7.1.6\">GP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.7.1.7\">NP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.7.1.8\">Dict.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.7.1.9\">GP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.7.1.10\">LP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.7.1.11\">NP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.7.1.12\">Dict.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.7.1.13\">GP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.7.1.14\">LP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.6.6.7.1.15\">NP</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.2.2.2.1\">\n=0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.2.2.2.2\">1133</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.2.2.3\">33.971</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r 
ltx_border_t\" id=\"S5.T1.2.2.2.4\">33.971</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.2.2.5\"><span class=\"ltx_text ltx_ulem_uline ltx_font_bold\" id=\"S5.T1.2.2.2.5.1\">86.603</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.2.2.6\">31.579</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.2.2.2.7\"><span class=\"ltx_text ltx_ulem_uline ltx_font_bold\" id=\"S5.T1.2.2.2.7.1\">86.603</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.2.2.8\">81.818</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.2.2.9\">31.579</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.2.2.10\"><span class=\"ltx_text ltx_ulem_uline ltx_font_bold\" id=\"S5.T1.2.2.2.10.1\">86.603</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.2.2.2.11\"><span class=\"ltx_text ltx_ulem_uline ltx_font_bold\" id=\"S5.T1.2.2.2.11.1\">86.603</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.2.2.12\">80.861</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.2.2.13\">31.579</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.2.2.14\">85.646</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.2.2.15\">86.124</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.3.3.3.1\">\n=1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.3.3.3.2\">3273</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.3.3.3.3\">55.502</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.3.3.3.4\">78.947</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.3.3.3.5\">85.646</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.3.3.3.6\">31.579</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.3.3.3.7\">85.646</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.3.3.3.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.3.3.8.1\">86.124</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.3.3.3.9\">31.579</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.3.3.3.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.3.3.3.10.1\">86.124</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.3.3.3.11\">85.646</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.3.3.3.12\">85.603</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.3.3.3.13\">31.579</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.3.3.3.14\">85.646</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.3.3.3.15\">85.646</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.4.4.4.1\">\n=2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.4.4.4.2\">1294</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.3\">55.502</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.4.4.4.4\">49.761</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.5\">83.732</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.6\">31.579</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.4.4.4.7\">83.732</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.8\">83.254</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.9\">31.579</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.10\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S5.T1.4.4.4.10.1\">84.211</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.4.4.4.11\">83.732</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.12\">83.732</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.13\">31.579</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.14\">83.254</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.15\">83.254</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.5.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.5.5.5.1\">\n=3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.5.5.5.2\">227</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.3\">31.579</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.5.5.5.4\">31.579</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.5.5.5.5.1\">78.947</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.6\">31.579</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.5.5.5.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.5.5.5.7.1\">78.947</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.8\">51.675</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.9\">31.579</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.10\">78.469</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.5.5.5.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.5.5.5.11.1\">78.947</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.12\">51.196</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.13\">31.579</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.14\">78.469</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.15\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.5.5.5.15.1\">78.947</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.6.6.6.1\">\n=4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.6.6.6.2\">16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.6.6.3\">31.579</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.6.6.6.4\">31.579</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.6.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.6.6.5.1\">55.981</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.6.6.6\">31.100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.6.6.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.6.6.7.1\">55.981</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.6.6.8\">32.057</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.6.6.9\">31.100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.6.6.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.6.6.10.1\">55.981</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.6.6.6.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.6.6.11.1\">55.981</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.6.6.12\">32.057</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.6.6.13\">37.799</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.6.6.14\">55.502</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" 
id=\"S5.T1.6.6.6.15\">54.067</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Article category classification accuracy for -NN graph of the Science News dataset for different simplex degrees. Dict.\u00a0implies that the SVM is trained solely on the dictionary coefficients while GP, LP, NP imply scattering networks with global, local, and no pooling, respectively. The best performer for each is indicated in bold while the underlined bold numbers are the best among all \u2019s.</figcaption>\n</figure>",
101
+ "capture": "Table 1: Article category classification accuracy for -NN graph of the Science News dataset for different simplex degrees. Dict.\u00a0implies that the SVM is trained solely on the dictionary coefficients while GP, LP, NP imply scattering networks with global, local, and no pooling, respectively. The best performer for each is indicated in bold while the underlined bold numbers are the best among all \u2019s."
102
+ },
103
+ "2": {
104
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T2.8\" style=\"width:433.6pt;height:116.2pt;vertical-align:-0.7pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-87.2pt,23.2pt) scale(0.71322190869229,0.71322190869229) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.8.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.1.1.1.2\">Knn = 10</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.1.1.1.3\">Delta</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.1.1.1.4\">Fourier</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S5.T2.1.1.1.5\">Diffusion</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"4\" id=\"S5.T2.1.1.1.6\">HGLET</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"4\" id=\"S5.T2.1.1.1.7\">GHWT</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.8.8.9.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.9.1.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.9.1.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.9.1.3\">Basis</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.9.1.4\">Basis</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.9.1.5\">Dict.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.9.1.6\">GP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.9.1.7\">NP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.9.1.8\">Dict.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.9.1.9\">GP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.9.1.10\">LP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.9.1.11\">NP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.9.1.12\">Dict.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.9.1.13\">GP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.9.1.14\">LP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.8.8.9.1.15\">NP</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.2.1\">\n=0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.2.2\">1133</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2.3\">35.238</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r 
ltx_border_t\" id=\"S5.T2.2.2.2.4\">35.238</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2.5\">60.952</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2.6\">32.381</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.2.7\">87.619</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2.8\">81.905</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2.9\">32.381</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.2.2.2.10.1\">88.571</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.2.11\">87.619</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2.12\">80.952</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2.13\">32.381</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2.14\">87.619</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2.15\">87.619</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.3.3.3.1\">\n=1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.3.3.3.2\">6890</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.3.3\">81.905</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.3.3.3.4\">81.905</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.3.5\">86.667</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.3.6\">32.381</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.3.3.3.7\">86.667</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.3.8\">85.714</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.3.9\">32.381</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.3.10\"><span class=\"ltx_text ltx_ulem_uline ltx_font_bold\" id=\"S5.T2.3.3.3.10.1\">89.524</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.3.3.3.11\">86.667</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.3.12\">85.714</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.3.13\">32.381</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.3.14\"><span class=\"ltx_text ltx_ulem_uline ltx_font_bold\" id=\"S5.T2.3.3.3.14.1\">89.524</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.3.15\"><span class=\"ltx_text ltx_ulem_uline ltx_font_bold\" id=\"S5.T2.3.3.3.15.1\">89.524</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.4.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.4.4.4.1\">\n=2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.4.4.4.2\">7243</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.4.3\">76.19</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.4.4.4.4\">76.19</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.4.5\">86.667</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.4.6\">32.381</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.4.4.4.7\">88.571</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.4.8\">85.714</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.4.9\">32.381</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.4.10\">88.571</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.4.4.4.11\">88.571</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.4.12\">88.571</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T2.4.4.4.13\">32.381</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.4.14\"><span class=\"ltx_text ltx_ulem_uline ltx_font_bold\" id=\"S5.T2.4.4.4.14.1\">89.524</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.4.15\">88.571</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.5.5.5.1\">\n=3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.5.5.5.2\">4179</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.5.3\">69.524</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.5.5.5.4\">69.524</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.5.5\">74.286</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.5.6\">33.333</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.5.5.5.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.5.7.1\">86.667</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.5.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.5.8.1\">86.667</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.5.9\">33.333</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.5.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.5.10.1\">86.667</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.5.5.5.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.5.11.1\">86.667</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.5.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.5.12.1\">86.667</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.5.13\">33.333</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.5.14\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.5.14.1\">86.667</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.5.15\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.5.15.1\">86.667</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.6.6.1\">\n=4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.6.6.2\">1740</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.6.3\">45.714</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.6.6.4\">45.714</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.6.5\">68.571</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.6.6\">35.238</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.6.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.6.6.6.7.1\">81.905</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.6.8\">73.333</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.6.9\">35.238</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.6.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.6.6.6.10.1\">81.905</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.6.6.6.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.6.6.6.11.1\">81.905</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.6.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.6.6.6.12.1\">81.905</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.6.13\">33.333</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.6.14\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.6.6.6.14.1\">81.905</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.6.15\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S5.T2.6.6.6.15.1\">81.905</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.7.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.7.7.7.1\">\n=5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.7.7.7.2\">560</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.7.7.7.3\">33.333</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.7.7.7.4\">33.333</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.7.7.7.5\">39.048</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.7.7.7.6\">34.286</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.7.7.7.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.7.7.7.7.1\">73.333</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.7.7.7.8\">60.952</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.7.7.7.9\">33.333</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.7.7.7.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.7.7.7.10.1\">73.333</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.7.7.7.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.7.7.7.11.1\">73.333</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.7.7.7.12\">60.952</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.7.7.7.13\">34.286</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.7.7.7.14\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.7.7.7.14.1\">73.333</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.7.7.7.15\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.7.7.7.15.1\">73.333</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.8.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.8.8.8.1\">\n=6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.8.8.8.2\">98</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.8.8.8.3\">32.381</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.8.8.8.4\">32.381</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.8.8.8.5\">32.381</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.8.8.8.6\">34.286</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.8.8.8.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.8.8.8.7.1\">62.857</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.8.8.8.8\">39.048</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.8.8.8.9\">35.238</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.8.8.8.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.8.8.8.10.1\">62.857</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.8.8.8.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.8.8.8.11.1\">62.857</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.8.8.8.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.8.8.8.12.1\">62.857</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.8.8.8.13\">35.238</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.8.8.8.14\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.8.8.8.14.1\">62.857</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T2.8.8.8.15\">60.952</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Article category classification accuracy for -NN graph of the Science News dataset 
for different simplex degrees. Dict.\u00a0implies that the SVM is trained solely on the dictionary coefficients while GP, LP, NP imply scattering networks with global, local, and no pooling, respectively. The best performer for each is indicated in bold while the underlined bold numbers are the best among all \u2019s. </figcaption>\n</figure>",
105
+ "capture": "Table 2: Article category classification accuracy for -NN graph of the Science News dataset for different simplex degrees. Dict.\u00a0implies that the SVM is trained solely on the dictionary coefficients while GP, LP, NP imply scattering networks with global, local, and no pooling, respectively. The best performer for each is indicated in bold while the underlined bold numbers are the best among all \u2019s. "
106
+ },
107
+ "3": {
108
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T3\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S6.T3.1\" style=\"width:433.6pt;height:122.9pt;vertical-align:-0.8pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-38.9pt,11.0pt) scale(0.847832572748797,0.847832572748797) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S6.T3.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T3.1.1.1.1.1\">Graph</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T3.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.1.1.1.1.2.1\">Node Scattering</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T3.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.1.1.1.1.3.1\">Edge Scattering</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T3.1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.1.1.1.1.4.1\">Combo</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T3.1.1.1.1.5\">GS-SVM</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T3.1.1.1.1.6\">GCN</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T3.1.1.1.1.7\">UGT</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T3.1.1.1.1.8\">DGCNN</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T3.1.1.1.1.9\">GAT</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T3.1.1.1.1.10\">GFN</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.1.1.2.1.1\">COLLAB</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.1.1.2.1.2\">70.84</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.1.1.2.1.3\">78.34</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T3.1.1.2.1.4\">80.39</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.1.1.2.1.5\">79.94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.1.1.2.1.6\">79.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.1.1.2.1.7\">77.84</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.1.1.2.1.8\">73.76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.1.1.2.1.9\">75.80</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T3.1.1.2.1.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.1.1.2.1.10.1\">81.50</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.1.1.3.2.1\">DD</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.3.2.2\">60.67</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.3.2.3\">68.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.1.1.3.2.4\">72.71</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.3.2.5\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.3.2.6\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.3.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.1.1.3.2.7.1\">80.23</span></td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S6.T3.1.1.3.2.8\">79.37</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.3.2.9\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.3.2.10\">79.37</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1.4.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.1.1.4.3.1\">IMDB-B</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.4.3.2\">72.70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.4.3.3\">70.60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.1.1.4.3.4\">73.10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.4.3.5\">71.20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.4.3.6\">74.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.4.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.1.1.4.3.7.1\">77.04</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.4.3.8\">70.03</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.4.3.9\">70.50</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.4.3.10\">73.40</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1.5.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.1.1.5.4.1\">IMDB-M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.5.4.2\">44.40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.5.4.3\">47.13</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.1.1.5.4.4\">49.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.5.4.5\">48.73</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.5.4.6\">51.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.5.4.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.1.1.5.4.7.1\">53.60</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.5.4.8\">47.83</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.5.4.9\">47.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.5.4.10\">51.80</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1.6.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.1.1.6.5.1\">MUTAG</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.6.5.2\">85.78</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.6.5.3\">86.31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.1.1.6.5.4\">85.78</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.6.5.5\">83.50</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.6.5.6\">85.60</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.6.5.7\">80.23</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.6.5.8\">79.37</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.6.5.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.1.1.6.5.9.1\">89.40</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.6.5.10\">85.83</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1.7.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.1.1.7.6.1\">PROTEINS</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.7.6.2\">73.57</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.7.6.3\">73.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T3.1.1.7.6.4\">75.35</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.7.6.5\">74.11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.7.6.6\">76.00</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.7.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.1.1.7.6.7.1\">78.53</span></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S6.T3.1.1.7.6.8\">75.54</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.7.6.9\">74.70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.1.1.7.6.10\">76.46</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.1.1.8.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S6.T3.1.1.8.7.1\">PTC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.1.1.8.7.2\">62.85</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.1.1.8.7.3\">67.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S6.T3.1.1.8.7.4\">68.28</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.1.1.8.7.5\">63.94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.1.1.8.7.6\">64.20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.1.1.8.7.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.1.1.8.7.7.1\">69.63</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.1.1.8.7.8\">58.59</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.1.1.8.7.9\">66.70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T3.1.1.8.7.10\">66.60</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Graph classification accuracy on seven datasets. The best performer for each dataset is indicated in bold. </figcaption>\n</figure>",
109
+ "capture": "Table 3: Graph classification accuracy on seven datasets. The best performer for each dataset is indicated in bold. "
110
+ },
111
+ "4": {
112
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T4\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S6.T4.1\" style=\"width:433.6pt;height:159pt;vertical-align:-1.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-5.5pt,2.0pt) scale(0.975095162614968,0.975095162614968) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S6.T4.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S6.T4.1.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T4.1.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S6.T4.1.1.1.1.2\">Hodge Scattering + SVM</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S6.T4.1.1.1.1.3\">UGT</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S6.T4.1.1.1.1.4\">GFN</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T4.1.1.2.2.1\">Graph</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T4.1.1.2.2.2\">Accuracy</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T4.1.1.2.2.3\"># Parameters</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T4.1.1.2.2.4\">Accuracy</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T4.1.1.2.2.5\"># Parameters</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S6.T4.1.1.2.2.6\">Accuracy</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S6.T4.1.1.2.2.7\"># Parameters</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T4.1.1.3.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T4.1.1.3.1.1\">COLLAB</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T4.1.1.3.1.2\">80.39</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T4.1.1.3.1.3\">256</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T4.1.1.3.1.4\">77.84</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T4.1.1.3.1.5\">866,746</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T4.1.1.3.1.6\">81.50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T4.1.1.3.1.7\">68,754</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.1.4.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T4.1.1.4.2.1\">DD</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.4.2.2\">72.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T4.1.1.4.2.3\">256</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.4.2.4\">80.23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T4.1.1.4.2.5\">76,928</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.4.2.6\">79.37</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.4.2.7\">68,754</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.1.5.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T4.1.1.5.3.1\">IMDB-B</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.5.3.2\">73.10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T4.1.1.5.3.3\">256</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S6.T4.1.1.5.3.4\">77.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T4.1.1.5.3.5\">55,508</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.5.3.6\">73.40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.5.3.7\">68,754</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.1.6.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T4.1.1.6.4.1\">IMDB-M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.6.4.2\">49.68</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T4.1.1.6.4.3\">256</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.6.4.4\">53.60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T4.1.1.6.4.5\">48,698</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.6.4.6\">51.80</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.6.4.7\">68,818</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.1.7.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T4.1.1.7.5.1\">MUTAG</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.7.5.2\">85.78</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T4.1.1.7.5.3\">256</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.7.5.4\">80.23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T4.1.1.7.5.5\">4,178</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.7.5.6\">85.83</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.7.5.7\">65,618</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.1.8.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T4.1.1.8.6.1\">PROTEINS</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.8.6.2\">75.35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T4.1.1.8.6.3\">256</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.8.6.4\">78.53</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T4.1.1.8.6.5\">1,878</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.8.6.6\">76.46</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T4.1.1.8.6.7\">65,618</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T4.1.1.9.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S6.T4.1.1.9.7.1\">PTC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T4.1.1.9.7.2\">68.28</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S6.T4.1.1.9.7.3\">256</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T4.1.1.9.7.4\">69.63</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S6.T4.1.1.9.7.5\">12,038</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T4.1.1.9.7.6\">66.60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S6.T4.1.1.9.7.7\">65,618</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Comparison of MHSN and state of the art graph classification networks in accuracy and number of learnable parameters</figcaption>\n</figure>",
113
+ "capture": "Table 4: Comparison of MHSN and state of the art graph classification networks in accuracy and number of learnable parameters"
114
+ },
115
+ "5": {
116
+ "table_html": "<figure class=\"ltx_table\" id=\"S7.T5\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S7.T5.8\" style=\"width:433.6pt;height:134.6pt;vertical-align:-0.7pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-74.7pt,23.1pt) scale(0.743807644809793,0.743807644809793) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S7.T5.8.8\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S7.T5.8.8.9.1\">\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.9.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S7.T5.8.8.9.1.2\">Diffusion+SVR</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S7.T5.8.8.9.1.3\">HGLET+SVR</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S7.T5.8.8.9.1.4\">GHWT+SVR</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.9.1.5\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S7.T5.8.8.9.1.5.1\">SchNet</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.9.1.6\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S7.T5.8.8.9.1.6.1\">PaiNN</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.9.1.7\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S7.T5.8.8.9.1.7.1\">SO3Net I</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T5.8.8.9.1.8\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S7.T5.8.8.9.1.8.1\">SO3Net II</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T5.8.8.10.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.10.2.1\">Feature Type</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.10.2.2\">Node</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.10.2.3\">Edge</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.10.2.4\">Both</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.10.2.5\">Node</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.10.2.6\">Edge</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.10.2.7\">Both</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.10.2.8\">Node</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.10.2.9\">Edge</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.10.2.10\">Both</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T5.8.8.11.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"14\" id=\"S7.T5.8.8.11.3.1\">Aspirin</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T5.8.8.12.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.12.4.1\">MAE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T5.8.8.12.4.2\">4.856</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T5.8.8.12.4.3\">3.132</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.12.4.4\">3.267</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T5.8.8.12.4.5\">4.884</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T5.8.8.12.4.6\">3.135</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.12.4.7\">3.285</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S7.T5.8.8.12.4.8\">4.928</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T5.8.8.12.4.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T5.8.8.12.4.9.1\">3.075</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.12.4.10\">3.225</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.12.4.11\">13.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.12.4.12\">3.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.12.4.13\">3.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T5.8.8.12.4.14\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T5.8.8.12.4.14.1\">2.6</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T5.8.8.13.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.8.8.13.5.1\">RMSE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.8.8.13.5.2\">6.181</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.8.8.13.5.3\">4.144</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.8.8.13.5.4\">4.314</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.8.8.13.5.5\">6.215</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.8.8.13.5.6\">4.129</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.8.8.13.5.7\">4.407</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.8.8.13.5.8\">6.213</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.8.8.13.5.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T5.8.8.13.5.9.1\">4.123</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.8.8.13.5.10\">4.316</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.8.8.13.5.11\">18.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.8.8.13.5.12\">5.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.8.8.13.5.13\">5.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.8.8.13.5.14\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T5.8.8.13.5.14.1\">3.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T5.4.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.4.4.4.5\"># Parameters</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.4.4.4.6\">924</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.4.4.4.7\">3784</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.4.4.4.8\">4708</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.4.4.4.9\">924</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.4.4.4.10\">3784</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.4.4.4.11\">4708</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.4.4.4.12\">924</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.4.4.4.13\">3784</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.4.4.4.14\">4708</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.1.1.1.1\">\n 432k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.2.2.2.2\">\n 341k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.3.3.3.3\">\n 283k</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.4.4.4.4\">\n 341k</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T5.8.8.14.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"14\" id=\"S7.T5.8.8.14.6.1\">Paracetamol</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T5.8.8.15.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.15.7.1\">MAE</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S7.T5.8.8.15.7.2\">4.609</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T5.8.8.15.7.3\">2.715</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.15.7.4\">2.795</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T5.8.8.15.7.5\">4.723</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T5.8.8.15.7.6\">2.643</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.15.7.7\">2.710</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T5.8.8.15.7.8\">4.748</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T5.8.8.15.7.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T5.8.8.15.7.9.1\">2.624</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.15.7.10\">2.699</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.15.7.11\">8.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.15.7.12\">2.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S7.T5.8.8.15.7.13\">2.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T5.8.8.15.7.14\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T5.8.8.15.7.14.1\">1.4</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T5.8.8.16.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.8.8.16.8.1\">RMSE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.8.8.16.8.2\">5.860</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.8.8.16.8.3\">3.418</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.8.8.16.8.4\">4.116</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.8.8.16.8.5\">5.964</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.8.8.16.8.6\">3.338</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.8.8.16.8.7\">3.424</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.8.8.16.8.8\">5.961</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.8.8.16.8.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T5.8.8.16.8.9.1\">3.299</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.8.8.16.8.10\">3.408</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.8.8.16.8.11\">11.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.8.8.16.8.12\">2.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S7.T5.8.8.16.8.13\">3.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T5.8.8.16.8.14\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T5.8.8.16.8.14.1\">1.9</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T5.8.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S7.T5.8.8.8.5\"># Parameters</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S7.T5.8.8.8.6\">924</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S7.T5.8.8.8.7\">3784</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S7.T5.8.8.8.8\">4444</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S7.T5.8.8.8.9\">924</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S7.T5.8.8.8.10\">3784</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S7.T5.8.8.8.11\">4444</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S7.T5.8.8.8.12\">924</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S7.T5.8.8.8.13\">3784</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b 
ltx_border_r\" id=\"S7.T5.8.8.8.14\">4444</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S7.T5.5.5.5.1\">\n432k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S7.T5.6.6.6.2\">\n 341k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S7.T5.7.7.7.3\">\n283k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S7.T5.8.8.8.4\">\n341k</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Comparison of the performance of our MHSNs and the other state-of-the-art GNNs for potential energy prediction. We report the accuracy via MAE and RMSE as well as the number of trainable parameters in each network.</figcaption>\n</figure>",
117
+ "capture": "Table 5: Comparison of the performance of our MHSNs and the other state-of-the-art GNNs for potential energy prediction. We report the accuracy via MAE and RMSE as well as the number of trainable parameters in each network."
118
+ }
119
+ },
120
+ "image_paths": {
121
+ "1": {
122
+ "figure_path": "2311.10270v5_figure_1.png",
123
+ "caption": "Figure 1: In this small 2222-complex C\ud835\udc36Citalic_C, e1\u223ce4similar-tosubscript\ud835\udc521subscript\ud835\udc524e_{1}\\sim e_{4}italic_e start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT \u223c italic_e start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT because they share the face v2subscript\ud835\udc632v_{2}italic_v start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, and e1\u223ce2similar-tosubscript\ud835\udc521subscript\ud835\udc522e_{1}\\sim e_{2}italic_e start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT \u223c italic_e start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT because they share the face v1subscript\ud835\udc631v_{1}italic_v start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT. Further e1\u2243e2similar-to-or-equalssubscript\ud835\udc521subscript\ud835\udc522e_{1}\\simeq e_{2}italic_e start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT \u2243 italic_e start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT because their hull t1\u2208Csubscript\ud835\udc611\ud835\udc36t_{1}\\in Citalic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT \u2208 italic_C, but e1\u2243\u0338e4not-similar-to-or-equalssubscript\ud835\udc521subscript\ud835\udc524e_{1}\\not\\simeq e_{4}italic_e start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT \u2243\u0338 italic_e start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT, so that e1\u2062\\stackunder[0pt]\u223c1\u2062e4subscript\ud835\udc521\\stackunder[0pt]\u223c1subscript\ud835\udc524e_{1}\\>\\>\\text{\\stackunder[0pt]{$\\sim$}{$\\scriptscriptstyle 1$}}\\>\\>e_{4}italic_e start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT [0pt] \u223c 1 italic_e start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT. We have t1\u223ct2similar-tosubscript\ud835\udc611subscript\ud835\udc612t_{1}\\sim t_{2}italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT \u223c italic_t start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT because they share the face e3subscript\ud835\udc523e_{3}italic_e start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT, and also t1\u2062\\stackunder[0pt]\u223c2\u2062t2subscript\ud835\udc611\\stackunder[0pt]\u223c2subscript\ud835\udc612t_{1}\\>\\>\\text{\\stackunder[0pt]{$\\sim$}{$\\scriptscriptstyle 2$}}\\>\\>t_{2}italic_t start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT [0pt] \u223c 2 italic_t start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT.",
124
+ "url": "http://arxiv.org/html/2311.10270v5/extracted/6029509/pics/two-triangle-example-clean.png"
125
+ }
126
+ },
127
+ "validation": true,
128
+ "references": [
129
+ {
130
+ "1": {
131
+ "title": "doi:10.1109/ICASSP49357.2023.10095803.",
132
+ "author": "C. Battiloro, P. Di Lorenzo, S. Barbarossa, Topological Slepians: Maximally localized representations of signals over simplicial complexes, in: ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023, pp. 1\u20135.",
133
+ "venue": null,
134
+ "url": "https://doi.org/10.1109/ICASSP49357.2023.10095803"
135
+ }
136
+ }
137
+ ],
138
+ "url": "http://arxiv.org/html/2311.10270v5"
139
+ }
20241127/2401.15479v4.json ADDED
@@ -0,0 +1,489 @@
1
+ {
2
+ "title": "Navigating the Post-API Dilemma Search Engine Results Pages Present a Biased View of Social Media Data",
3
+ "abstract": "Recent decisions to discontinue access to social media APIs are having detrimental effects on Internet research and the field of computational social science as a whole. This lack of access to data has been dubbed the Post-API era of Internet research. Fortunately, popular search engines have the means to crawl, capture, and surface social media data on their Search Engine Results Pages (SERP) if provided the proper search query, and may provide a solution to this dilemma. In the present work we ask: does SERP provide a complete and unbiased sample of social media data? Is SERP a viable alternative to direct API-access? To answer these questions, we perform a comparative analysis between (Google) SERP results and nonsampled data from Reddit and Twitter/X. We find that SERP results are highly biased in favor of popular posts; against political, pornographic, and vulgar posts; are more positive in their sentiment; and have large topical gaps. Overall, we conclude that SERP is not a viable alternative to social media API access.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "In February 2023, Twitter/X announced its plan to discontinue free access to its API-services. Shortly thereafter, Reddit announced that it would take a similar action and likewise discontinue free API-access. In an interview with the New York Times, Steve Huffman, the CEO of Reddit, explained his rationale stating, \u201cThe Reddit corpus of data is really valuable, but we don\u2019t need to give all of that value to some of the largest companies in the world for free Isaac (2023 ###reference_b15###).\u201d The discontinuation of API access on both Twitter/X and Reddit led to a backlash from developers, especially those of the third-party applications, which rely heavily on API-access from these sites. Many of these third-party applications and their companies have since shuttered their operations. A similar effect has been felt among researchers and academics who rely on access to data for scholarship in countless areas of study.\n###figure_1### For example, scholars have been using access to Reddit\u2019s API-service almost since its founding in 2006. This data has led to several studies, especially in the field of discourse Botzer et al. (2022 ###reference_b5###), computational journalism Priya et al. (2019 ###reference_b33###), and computational linguistics Basile et al. (2021 ###reference_b2###); Wang and Luo (2021 ###reference_b40###); Melton et al. (2021 ###reference_b23###); Liu (2020 ###reference_b17###) to name a few. This is true even moreso for Twitter/X, which has seen numerous studies on follower networks Martha et al. (2013 ###reference_b20###); Yardi et al. (2010 ###reference_b42###), event detection Weng and Lee (2011 ###reference_b41###); Vieweg et al. (2010 ###reference_b39###); Hassan et al. (2020 ###reference_b14###), and coordinated influence campaignsPacheco et al. (2021 ###reference_b29###); Keller et al. (2020 ###reference_b16###). Without access to data, this type of scholarship will be difficult or impossible. We call this the Post-API Dilemma and in this paper we begin to ask the question: How shall scientists continue their work without access to this data?"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Data Collection Methodology",
15
+ "text": "To confidently study any biases in SERP, it is important to obtain strong unbiased social media datasets from which to compare. Specifically, we investigate Reddit and Twitter/X. In both cases we are confident that the collected samples are nearly complete within a specific time window."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Reddit Data",
21
+ "text": "We collected data from Pushshift222http://pushshift.io ###reference_ushshift.io### up until March of 2023. This data nearly complete, but may lack posts and comments that were either identified as spam by Reddit or deleted, edited, or removed by the moderation team or by the user before the Pushshift data collection service was able to collect the data, or was otherwise inaccessible by virtue of the post or comment being in a quarantined subreddit or otherwise. Nevertheless, it is likely that this dataset contains almost all the social media content that was visible to a regular user of Reddit. The number of up-/downvotes, awards, flairs, and other metadata associated with a post changes regularly; these changes are not reflected in this dataset. However, the present work mostly considers the text-content of the posts and comments so the ever-changing metadata is not relevant.\nWe focus our investigation on the posts and comments submitted to Reddit between January 1, 2023 and January 31, 2023. This timeframe was chosen because it is recent, complete, and large. In total, this subset contains 36,090,931 posts and 253,577,506 comments. We tokenized this dataset using Lucene\u2019s StandardAnalyzer, which removes whitespace, moves all text to lowercase, and removes the stopwords. In addition, we also removed any tokens that contained non-alphabetic characters, tokens with fewer than 3 characters and those with frequency less than 100. We then ranked each token according to its document frequency, and selected 1000 keywords by stratified sampling. Stratified sampling is used to ensure that the set of keywords are uniformly distributed from common to rare words, i.e., they are not dominated by words of one type or another."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Twitter/X",
27
+ "text": "Obtaining a complete set of Twitter/X data is difficult, even for a single day Pfeffer et al. (2023 ###reference_b32###). To make matters worse, new restrictions limit the sharing of data to only the identifiers, which do not contain the content of the post. Fortunately, there do exist Twitter/X datasets that are nearly complete for some subset of the platform. Specifically, we used a dataset of Tweets related to the COVID-19 pandemic, which was collected for one month starting on March 29, 2020, and ending on April 30, 2020 Smith (2020a ###reference_b35###, b ###reference_b36###). This dataset contains tweets that contain one of seven different hashtags, (e.g., #coronavirus, #coronavirusoutbreak, #covid19) for a total of 14,607,013 tweets during the time period."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "SERP Data",
33
+ "text": "Search engines like Bing and Google have the infrastructure to collect social media data at a massive scale. Researchers who rely on data access have been turning to services that provide relatively inexpensive SERP-API access. It is infeasible to simply ask SERP for a list of all tweets. So we used the SERP-API system to query Google with each of the 1000 random keywords extracted from the Reddit dataset.\nComparing against the Reddit dataset required each query to be of the form: site:reddit.com {keyword}. We found that the majority of queries were limited to 100 results each, so we repeated each query setting the date restriction for one day-at-a-time for each day in January 2023 thereby matching the timeframe from Reddit-dataset. All other options were kept at their Google-defaults except safe-search, which we disabled. Furthermore, the ScaleSERP service notes that they use thousands of proxies distributed throughout the world, so the results presented in the current study should not be biased by geographical region.\nComparing against the Twitter/X dataset used a similar methodology, except the queries needed to also include one of the hashtags used to obtain the Twitter data in order to maintain a fair comparison. The Twitter query to SERP was of the form: site:twitter.com {hashtag} {keyword} for each keyword. We use the same 1000 keywords as the used to query Reddit. Because many tweets contained more than one of the relevant hashtags we randomly sampled a single covid-hashtag from the list of seven for each keyword. We were again careful to match the dates from the Twitter/X dataset. Like in the Reddit-SERP methodology, all other options were kept at their Google-defaults except safe-search, which we again disabled."
34
+ },
35
+ {
36
+ "section_id": "2.4",
37
+ "parent_section_id": "2",
38
+ "section_name": "Data Models",
39
+ "text": "Relative to the enormous size of the nearly-complete Reddit and Twitter/X datasets, the results gathered from SERP were surprisingly small. In total SERP gathered 1,296,958 results from Reddit and 70,018 tweets from Twitter/X. Note that the results for Reddit are typically links to entire comment threads, but SERP results for Twitter/X are typically link to a single Tweet.\nData for Reddit includes the post/comment-id, userid, score, post-title, post-text, and all comments on the post. Results for Twitter/X only contain the userid, Tweet-id, and the Tweet-content. With this data, it is possible to perform a comparison of the data gleaned from SERP against the data known to exist on Reddit and Twitter/X.\n###figure_2###"
40
+ },
41
+ {
42
+ "section_id": "3",
43
+ "parent_section_id": null,
44
+ "section_name": "Popularity Analysis",
45
+ "text": "We begin with an analysis that characterizes any relationship between the popularity of a user or the score of a post and the means scores from SERP. Search engines are well known to promote highly authoritative sources Sundin et al. (2022 ###reference_b37###), and this may (or may not) be true for the results for social media searches. We expect that high-scoring Reddit posts and Tweets from highly influential Twitter/X users will dominate the SERP results, thereby introducing a bias in the overall search results.\nTo characterize any popularity bias induced by SERP, we compared the number of followers of the posting user from Twitter/X and the score of the post from Reddit. Overall, the mean and median post-score in our nonsampled Reddit dataset was 48.97 and 1.0 respectively compared to 550.69 and 21.0 from SERP; the mean and median number of followers in our nonsampled Twitter/X dataset was 63,250.16 and 873.0 respectively compared to 544,934.63 and 21,547.0 from SERP. Therefore, it does appear that SERP returns posts that are statistically significantly higher than the typical Reddit post (MannWhitney =0.259 p\u00a10.001) and the typical active Twitter/X user (MannWhitney =0.194 p\u00a10.001).\nWe also compared the popularity score as a function of the SERP rank.In both cases, Spearman correlation tests showed almost no correlation between a Twitter/X user\u2019s follower count and its rank from SERP (=0.002); and almost no correlation between the score of the Reddit post and its rank from SERP (=0.001)."
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "Keyword-based Comparison",
51
+ "text": "Next we look to identify keyword-level discrepancies between the datasets. Typical token-based analysis takes the view that the text-datasets can be represented as a bag-of-words. Then, any number of statistical analysis can be employed to compare these categorical distributions Cha (2007 ###reference_b6###); Deza and Deza (2006 ###reference_b7###). But these traditional distances are difficult to interpret when the data is Zipfian Gerlach et al. (2016 ###reference_b10###), as most text-data is Miller (1951 ###reference_b25###). Recently, the Rank Turbulence Divergence (RTD) metric was introduced as an illuminating and information-theoretic measure of the difference between two text-corpora Dodds et al. (2023 ###reference_b8###). We employ this measure to identify any token-level sample bias from SERP.\n###figure_3### Formally, let R1 and R2 be two word distributions ranked from most common to least common. To start, the RTD calculates the element-wise divergence as follows:\nwhere represents a token and and denote its ranks within R1 and R2, respectively. Because Eq. 1 ###reference_### introduces bias towards higher ranks, the authors introduce a control parameter as:\nFor each token present in the combined domain of R1 and R2, we compute their divergence using Eq. 2 ###reference_###. In the present work, we use , which has been shown in previous work to deliver a reasonably balanced list of words with ranks from across the common-to-rare spectrum Dodds et al. (2023 ###reference_b8###).\nThe final RTD is a comparison of R1 and R2 summed over all element-level divergence. It includes a normalization prefactor and takes the following form.\nA lower score indicates little rank divergence; a higher score indicates a larger divergence. The mean RTD comparing SERP results and the non-sampled social media data from all 1000 keywords is listed in Tab. 1 ###reference_###. Raw numbers are difficult to interpret on their own, but a control test performed in the next section found an RTD of about 0.30 on random comparisons of the same dataset. The RTD comparing SERP results for Reddit and Twitter/X were both dramatically higher than the control.\nOverall, this domain-level analysis shows that SERP results are substantially different from each other. Next, our goal is to characterize the nature of this difference."
52
+ },
53
+ {
54
+ "section_id": "4.1",
55
+ "parent_section_id": "4",
56
+ "section_name": "Token-level Analysis",
57
+ "text": "Recall that the corpus-level RTD values presented above from Eq. 3 ###reference_### are the mean average of the rank divergence of the individual words from Eq. 2 ###reference_###. This permits a token-level analysis to find the words/tokens that diverge the most within the dataset for each keyword. We do this by capturing the output and the sign from Eq. 2 ###reference_### i.e., disregarding the absolute value function, for each token in the posts/Tweets returned by SERP or returned by a keyword-query to the nonsampled social media data. Figure 2 ###reference_### shows the distribution of the token-level divergences (Eq. 2 ###reference_###) and their mean (representing Eq. 3 ###reference_###) for terms that have the highest mean rank divergence in favor of Google\u2019s SERP and in favor of the nonsampled social media data from Twitter/X (on left) and Reddit (on right). In other words, the terms in red (i.e., top subplots) are more likely to be returned from the Google\u2019s SERP compared to the nonsampled social media data, and vice versa.\nOn the nonsampled Twitter/X data we are far more likely to encounter hashtag-style terms, along with politically salient terms referencing then-President Trump, terms blaming China for the pandemic, and other terms with a generally negative sentiment. The results from SERP, in contrast, illustrate medical information (hospital, health, services, support), government offices (city, minister, socialsecurity, facility, district), and terms with a more-positive (or at least more-neutral) tone.\nFrom the Reddit data we find that Reddit-specific terms like removed, comments, deleted, unsubscribe, etc are far more likely to appear on Reddit compared to SERP, which is reasonable, but also means that the search engine is more likely to hold-back these items from the search results. We also find that the nonsampled Reddit data is far more likely to have terms from American football (Romo, Mahomes, Bengals, Jags, refs), which was in its playoffs during the data collection period, political terms (Trump, Ukraine, McCarthy, neoliberal, republican, vote), and vulgarity.\n###figure_4###"
58
+ },
59
+ {
60
+ "section_id": "4.2",
61
+ "parent_section_id": "4",
62
+ "section_name": "Term-Frequency Analysis",
63
+ "text": "Although a targeted social media analysis might intentionally develop a list of query words, like the COVID hashtags used to collect the Twitter / X data, recall that the keywords used in our token-level analysis were selected from a stratified sample of all terms ordered by their document frequency. Here, we ask whether there is a relationship between the frequency of a keyword (as measured by the document frequency) and its RTD.\nWe compute the RTD values comparing SERP results against the nonsampled social media data for each of the 1000 keywords. We compare these values against an RTD control set created by randomly assigning 5000 random posts from the nonsampled social media data for each keyword.\nCompared to the control set, Fig. 3 ###reference_### shows that the RTD is consistently higher on the Twitter/X dataset. On the Reddit dataset, we find that the RTD starts out relatively low for the most common words, but then rises substantially for more-informative words with medium document frequencies, and then reverts to the control for the less common words.\nTogether, these results indicate that Google\u2019s SERP returns a highly skewed view of the underlying social media data, and that the difference is most pronounced in terms that are most informative.\n###figure_5###"
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Sentiment Analysis",
69
+ "text": "Social media has also been widely used to glean information regarding sentiment and emotional judgement regarding various topics Liu (2020 ###reference_b17###). Although we do not investigate any single-topic or event in the present work, we do make use of sentiment analysis tools to determine if SERP produces and bias in terms of sentiment or emotionally-salient language. We do this at the post-level and employ a sentiment analysis model called TimeLMs Loureiro et al. (2022 ###reference_b19###) based on the roberta Liu et al. (2019 ###reference_b18###) transformer architecture. Although this model was originally finetuned and evaluated primarily on Twitter/X data, we capitalize on its capability to grasp the universal characteristics of social media language, as highlighted in previous studies Guo et al. (2020 ###reference_b13###), thereby allowing us to use it as foundational model for sentiment analysis on both Twitter and Reddit.\nThe findings from the term-level analysis also appeared to have a difference in the overall sentiment and emotional salience. Simply put, Google\u2019s SERP appeared to return social media posts that were much more positive compared to the nonsampled social media data.\nSentiment analysis on social media has been used for decades to gauge the users\u2019 attitudes towards products, organizations, issues and their various facets Mei et al. (2007 ###reference_b22###). Analysis of sentiment has become one of the widely researched areas in the recent times, and many large organizations have entire social media teams dedicated to managing their social media presence. In the Post-API era, it is important to understand if SERP provides an accurate characterization of the true distribution of sentiment found on social media Trezza (2023 ###reference_b38###).\nSentiment analysis tools can be deployed on various levels including sentence-level Farra et al. (2010 ###reference_b9###), document level Yessenalina et al. (2010 ###reference_b43###), and aspect level Nikos et al. (2011 ###reference_b28###) analysis. For this task we used a sentiment analysis model based on the Roberta transformer model Liu et al. (2019 ###reference_b18###) that was fine-tuned on Twitter data. The sentiment analysis model was applied to each Reddit post and Tweet; it returned a probability that each post was neutral, positive, or negative.\nThe mean-average of the sentiment probabilities and their 95% confidence intervals are plotted in Fig. 5 ###reference_###. For Reddit, we find that the posts returned by SERP were statistically more-positive than the nonsampled Reddit data () and vice versa. In case of Twitter, we observed a large difference in the negative sentiment, and a small, but statistically significant decrease in positive sentiment. However, we assess that the large shift from negative to neutral outweighs the positive to neutral shift resulting in an overall more-positive posts retrieved from SERP. We used the one-sample two-tailed Student T-test to determine statistical significance in both cases."
70
+ },
71
+ {
72
+ "section_id": "6",
73
+ "parent_section_id": null,
74
+ "section_name": "Gaps in Topical Coverage",
75
+ "text": "Our final analysis investigates the topical coverage of the data. If SERP returned an unbiased sample of the underlying social media data, then we would expect that the topical coverage of the results returned from SERP would have a similar topical distribution of the nonsampled social media data.\nTopical analysis on text data is a well-understood and deeply investigated mode of analysis starting from latent semantic indexing (LSI) and latent Dirichlet allocation (LDA) models in the past Blei et al. (2003 ###reference_b4###); Blei (2012 ###reference_b3###) to learned vector representation models such as word2vec Mikolov et al. (2013 ###reference_b24###) and GLOVE Pennington et al. (2014 ###reference_b31###). But these term-based models do not provide a contextual understanding of sentence-level semantics found in social media posts. Sentence Transformers, on the other hand, provide contextual whole-sentence embeddings of entire sentences or paragraphs Reimers and Gurevych (2019 ###reference_b34###). Therefore, sentence transformers are a powerful tool for tasks such as sentence similarity, clustering, and retrieval.\nIn order to compare the topical coverage of SERP results against the nonsampled social media data, we used a pretrained multilingual sentence transformer model called paraphrase-multilingual-mp net-base-v2 Reimers and Gurevych (2019 ###reference_b34###) to encode each social media post and Tweet, from both the nonsampled social media data and from SERP, into a 768-dimensional vector space. Because this pretrained transformer model was not fine-tuned on any of the datasets, it will encode the posts from all of the datasets into the same high-dimensional semantic space. We then used UMAP to project the high-dimensional embeddings into a shared two-dimensional projection McInnes et al. (2018 ###reference_b21###).\nThe resulting plots are illustrated in Figs. 4 ###reference_### and 6 ###reference_### for Reddit and Twitter/X, respectively. A complete, interactive visualization of these plots is available here ###reference_t.ly/PByYo### (viewer discretion is advised). In both cases, the nonsampled social media data from Reddit and Twitter/X is plotted in blue; the results from SERP are plotted in Red and always in front of the nonsampled social media plots. Because the results from SERP are a subset of the nonsampled social media data, the points visible in red usually elide and therefore match the same post from the nonsampled social media data. As a result, the points visible in blue indicate gaps in topical coverage in SERP results.\nTopical gaps in Reddit coverage are illustrated as blue points in Fig. 4 ###reference_###. We identified several topical-areas where Reddit data was not covered by results from SERP; five of these areas are selected and a representative sample of the social media post. One exception is in the top-most cluster, which contained mostly pornographic posts; this particular example was deliberately chosen to sanitize the illustration from highly graphic and sexually explicit language, which make up the majority of this cluster. Overall we find that SERP generally censors Reddit posts that are pornographic, spam, highly-political, and contain moderation messages.\nWe find similar coverage gaps on Twitter/X illustrated in Fig. 6 ###reference_###. Several topical gaps are evident; we focus on five clusters with representative Tweets illustrated on the right. 
Perhaps the tightest cluster in this illustration focus on the hashtag #ChinaLiedPeopleDied; another focuses on negative political aspects of then-President Trump. Generally, the coverage gaps appear to highly align with the sentiment analysis from the previous section and can be broadly characterized as focusing on negative content, while SERP results tend to focus on healthcare-related content.\n###figure_6###"
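The embedding-and-projection step described above can be sketched as follows; the package names (sentence-transformers, umap-learn), the tiny toy corpus, and the UMAP settings are illustrative rather than the study's exact configuration.

```python
from sentence_transformers import SentenceTransformer
import umap

# Same pretrained encoder named above; it is not fine-tuned on either dataset.
encoder = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

posts = ["Masks are required at the clinic starting Monday.",
         "Who do you think wins the playoff game this weekend?",
         "The new testing site downtown opens at 8am.",
         "Unbelievable refereeing in the fourth quarter last night."]

embeddings = encoder.encode(posts)  # shape (n_posts, 768)
projection = umap.UMAP(n_components=2, n_neighbors=2,
                       random_state=42).fit_transform(embeddings)
print(projection.shape)  # -> (4, 2); real runs project millions of posts
```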
76
+ },
77
+ {
78
+ "section_id": "7",
79
+ "parent_section_id": null,
80
+ "section_name": "Discussion",
81
+ "text": "The results of this study, overall four dimensions of analysis,\nclearly show that SERP results are highly biased samples of the social media data. Although this is not unexpected, the nature of these differences were surprising. This analysis is an early step in understanding the tradeoffs that result in the use of SERP results as a replacement for API access.\nWe summarize the results as follows: (1) We found that SERP results return posts from Twitter/X users that have a dramatically larger following than the average Twitter user; likewise, for Reddit we find that that SERP results return posts that have a dramatically higher score than the average Reddit post. Unexpectedly, we did not find any correlation between user popularity or post score and its rank in the SERP results. (2) Token-level analysis found a substantive difference in the likelihood of various terms appearing in posts returned by SERP. SERP results appeared to be less political, less vulgar, and, on the COVID-oriented Twitter/X dataset, far more likely to mention social and health services. (3) The token-level analysis appeared to show that SERP results were generally more positive than the nonsampled social data. Indeed a full-scale sentiment analysis showed that SERP results tended to be statistically more-positive than the average social media post. (4) Finally, maps of topical coverage indicated vast swaths of the semantic space were missing from the SERP results. Further investigation found that pornographic, vulgar, political, and spam posts were largely absent from SERP results."
82
+ },
83
+ {
84
+ "section_id": "7.1",
85
+ "parent_section_id": "7",
86
+ "section_name": "Cost Analysis",
87
+ "text": "At present, nearly every social media platform either charges for API access, severely limits access, or does not provide API access at all. The nonsampled data used in the present work was collected prior to API access being put into place. Nevertheless, we wish to provide an analysis of what it would cost to collect the nonsampled data at present rates. For this we considered three sources: Reddit API, Twitter API, and ScaleSERP API. The Reddit API, which charges 24 cents per 1,000 API requests, would cost 240 USD to obtain 1 million posts. The Twitter API comes at a significantly higher price, with a cost of 5,000 USD for 1 million posts per month. ScaleSERP uses a slightly different payment model; it is important to note that ScaleSERP generally provides a maximum of 100 results (if available) per call. As a result, approximately 10,000 queries are required to retrieve 1 million posts. The cost for 10,000 API requests from the ScaleSERP is 59 USD. Using SERP is clearly a cost-conscious decision, but does come with a price of a highly biased sample."
88
+ },
89
+ {
90
+ "section_id": "7.2",
91
+ "parent_section_id": "7",
92
+ "section_name": "Threats to Validity",
93
+ "text": "This present work is not without limitations. To begin, the initial assumption of our analysis is that the data gathered from Reddit and Twitter/X are (almost) complete. Although most social media posts grow stale after a few days Glenski et al. (2017 ###reference_b12###), any user may comment, retweet, like, or upvote any post at anytime; as a result this data may not account for social activity that occurred after the time of capture. In a similar vein, although the the Twitter/X dataset is (almost) complete for the 7 COVID hashtags, these hashtags certainly do not comprehensively encompass all of the discussions surrounding COVID. We ensure, however, that our SERP queries included the same hashtags along with the keyword, minimizing the likelihood of search bias or an incompleteness bias.\nMoreover, given the severity and the impact of the topic surrounding COVID19, search engines may have placed stricter moderation policies on this topic, thereby challenging our study\u2019s findings. However, our findings from Fig 2 ###reference_### indicate that Reddit data from SERP had similar more-positive and more-authoritative terms than the full Reddit dataset as in the case with Twitter/X dataset. We believe this finding should serve to abate questions of any extra-moderation in the COVID-sample.\nAnother notable limitation stems from the non-deterministic nature of SERP results. The data retrieved from SERP may or may not appear at the same rank (or at all) when re-queried. This could impact our analysis, particularly the rank-based correlation results in the analysis of popularity. Given that we query SERP with 1,000 keywords from common-to-rare in terms of frequency, the samples collected should still provide valuable insights for our study.\nIn addition, we believe this study can serve as a pathway for several interesting follow-up experiments: analysis focusing on the subreddits, and a study on the moderation policies of search engines. These, and other studies, could provide a more holistic understanding of the incompleteness of search engine data and provide researchers with deeper understanding of its suitability as a replacement for direct access to social media data."
94
+ },
95
+ {
96
+ "section_id": "7.3",
97
+ "parent_section_id": "7",
98
+ "section_name": "Conclusions",
99
+ "text": "Taken together, these findings collectively point to a large bias in SERP results. This raises the question on the validity of any research that is performed with data collected from SERP only; however, we currently know of none. Overall, we conclude that SERP is probably not a viable alternative to direct access to social media data.\nFuture research that heavily relies on SERP results may provide value, but it is important that these future works are cognizant of the limitations and biases in SERP results and are careful not to make conclusions that do not rely on an unbiased sample of social media data. However, it is also important to highlight the cases where SERP results can serve as a trustworthy data source. For example, studies which study search engines can make natural use of SERP results; likewise, SERP may be used as a seed set for additional analysis.\nAlthough the present work answers many questions, it raises others as well. We are additionally interested in the differences between SERP results and Web-scraped results. For example, it could be the case that SERP results are actually a unbiased sample of the results that social media platforms provide to the search engine\u2019s scraper; it is entirely possible the data bias that we attribute to the search engine algorithm is actually the result of the data that the social media sites provide to the scraper."
100
+ }
101
+ ],
102
+ "appendix": [],
103
+ "tables": {
104
+ "1": {
105
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Rank Turbulence Divergence (RTD) between SERP results and the nonsampled social media data.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.1\">Site</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.2.1\">RTD</span> (SERP vs Social Media)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.1.2.1.1\">Reddit</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.2.1.2\">0.47</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T1.1.3.2.1\">Twitter</th>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.1.3.2.2\">0.70</td>\n</tr>\n</tbody>\n</table>\n</figure>",
106
+ "capture": "Table 1: Rank Turbulence Divergence (RTD) between SERP results and the nonsampled social media data."
107
+ }
108
+ },
109
+ "image_paths": {
110
+ "1": {
111
+ "figure_path": "2401.15479v4_figure_1.png",
112
+ "caption": "Figure 1: Problem definition. We ask: does Google-SERP provide an unbiased sample of social media data? Is Google-SERP a valid replacement for social media APIs?",
113
+ "url": "http://arxiv.org/html/2401.15479v4/extracted/6029659/images/googleparis.jpg"
114
+ },
115
+ "2": {
116
+ "figure_path": "2401.15479v4_figure_2.png",
117
+ "caption": "Figure 2: Signed Rank Turbulence Divergence (RTD) for the most divergent terms comparing results from SERP against Twitter/X (on left) and against Reddit (on right). Terms that are more likely to appear in SERP results are listed on top (red). Terms that are more likely to appear in the nonsampled social media data are listed on the bottom (blue). We find that the social media posts returned by SERP are far more likely to contain innocuous terms compared to the nonsampled social media data.",
118
+ "url": "http://arxiv.org/html/2401.15479v4/extracted/6029659/images/umap/rtd.png"
119
+ },
120
+ "3": {
121
+ "figure_path": "2401.15479v4_figure_3.png",
122
+ "caption": "Figure 3: Rank Turbulence Divergence (RTD) as a function of Document Frequency. Common terms and rare terms shared between SERP results diverge from the substantially nonsampled social media data.",
123
+ "url": "http://arxiv.org/html/2401.15479v4/extracted/6029659/images/umap/rtdfreq.png"
124
+ },
125
+ "4": {
126
+ "figure_path": "2401.15479v4_figure_4.png",
127
+ "caption": "Figure 4: Topical coverage map of the nonsampled Reddit data (blue) and posts returned from SERP (red). Clusters of blue show topical clusters that are found in the nonsampled Reddit data that are not returned by SERP. Examples of some of the clusters are listed on the right.",
128
+ "url": "http://arxiv.org/html/2401.15479v4/x1.png"
129
+ },
130
+ "5": {
131
+ "figure_path": "2401.15479v4_figure_5.png",
132
+ "caption": "Figure 5: Sentiment Probabilities and their 95% confidence intervals. SERP results were statistically more-positive or more-neutral than the nonsampled social media data.",
133
+ "url": "http://arxiv.org/html/2401.15479v4/extracted/6029659/images/umap/sentiment.png"
134
+ },
135
+ "6": {
136
+ "figure_path": "2401.15479v4_figure_6.png",
137
+ "caption": "Figure 6: Topical coverage map of the nonsampled Twitter/X data (blue) and Tweets returned from SERP (red). Clusters of blue show topical clusters that are found in the nonsampled Twitter data that are not returned by SERP. Examples of some of the clusters are listed on the right.",
138
+ "url": "http://arxiv.org/html/2401.15479v4/x2.png"
139
+ }
140
+ },
141
+ "validation": true,
142
+ "references": [
143
+ {
144
+ "1": {
145
+ "title": "How dramatic events can affect emotionality in social posting: The impact of COVID-19 on Reddit.",
146
+ "author": "Valerio Basile, Francesco Cauteruccio, and Giorgio Terracina. 2021.",
147
+ "venue": "Future Internet 13, 2 (2021), 29.",
148
+ "url": null
149
+ }
150
+ },
151
+ {
152
+ "2": {
153
+ "title": "Probabilistic topic models.",
154
+ "author": "David M Blei. 2012.",
155
+ "venue": "Commun. ACM 55, 4 (2012), 77\u201384.",
156
+ "url": null
157
+ }
158
+ },
159
+ {
160
+ "3": {
161
+ "title": "Latent dirichlet allocation.",
162
+ "author": "David M Blei, Andrew Y Ng, and Michael I Jordan. 2003.",
163
+ "venue": "Journal of machine Learning research 3, Jan (2003), 993\u20131022.",
164
+ "url": null
165
+ }
166
+ },
167
+ {
168
+ "4": {
169
+ "title": "Analysis of moral judgment on reddit.",
170
+ "author": "Nicholas Botzer, Shawn Gu, and Tim Weninger. 2022.",
171
+ "venue": "IEEE Transactions on Computational Social Systems (2022).",
172
+ "url": null
173
+ }
174
+ },
175
+ {
176
+ "5": {
177
+ "title": "Comprehensive survey on distance/similarity measures between probability density functions.",
178
+ "author": "Sung-Hyuk Cha. 2007.",
179
+ "venue": "City 1, 2 (2007), 1.",
180
+ "url": null
181
+ }
182
+ },
183
+ {
184
+ "6": {
185
+ "title": "Dictionary of distances.",
186
+ "author": "Michel-Marie Deza and Elena Deza. 2006.",
187
+ "venue": "Elsevier.",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "7": {
193
+ "title": "Allotaxonometry and rank-turbulence divergence: A universal instrument for comparing complex systems.",
194
+ "author": "Peter Sheridan Dodds, Joshua R Minot, Michael V Arnold, Thayer Alshaabi, Jane Lydia Adams, David Rushing Dewhurst, Tyler J Gray, Morgan R Frank, Andrew J Reagan, and Christopher M Danforth. 2023.",
195
+ "venue": "EPJ Data Science 12, 1 (2023), 37.",
196
+ "url": null
197
+ }
198
+ },
199
+ {
200
+ "8": {
201
+ "title": "Sentence-level and document-level sentiment mining for arabic texts. In 2010 IEEE international conference on data mining workshops. IEEE, 1114\u20131119.",
202
+ "author": "Noura Farra, Elie Challita, Rawad Abou Assi, and Hazem Hajj. 2010.",
203
+ "venue": "",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "9": {
209
+ "title": "Similarity of symbol frequency distributions with heavy tails.",
210
+ "author": "Martin Gerlach, Francesc Font-Clos, and Eduardo G Altmann. 2016.",
211
+ "venue": "Physical Review X 6, 2 (2016), 021009.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "10": {
217
+ "title": "Content moderation, AI, and the question of scale.",
218
+ "author": "Tarleton Gillespie. 2020.",
219
+ "venue": "Big Data & Society 7, 2 (2020), 2053951720943234.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "11": {
225
+ "title": "Consumers and curators: Browsing and voting patterns on reddit.",
226
+ "author": "Maria Glenski, Corey Pennycuff, and Tim Weninger. 2017.",
227
+ "venue": "IEEE Transactions on Computational Social Systems 4, 4 (2017), 196\u2013206.",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "12": {
233
+ "title": "Benchmarking of transformer-based pre-trained models on social media text classification datasets. In Proceedings of the the 18th annual workshop of the australasian language technology association. 86\u201391.",
234
+ "author": "Yuting Guo, Xiangjue Dong, Mohammed Ali Al-Garadi, Abeed Sarker, Cecile Paris, and Diego Moll\u00e1 Aliod. 2020.",
235
+ "venue": "",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "13": {
241
+ "title": "Towards automated sexual violence report tracking. In Proceedings of the international AAAI conference on web and social media, Vol. 14. 250\u2013259.",
242
+ "author": "Naeemul Hassan, Amrit Poudel, Jason Hale, Claire Hubacek, Khandaker Tasnim Huq, Shubhra Kanti Karmaker Santu, and Syed Ishtiaque Ahmed. 2020.",
243
+ "venue": "",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "14": {
249
+ "title": "\u201dReddit Wants to Get Paid for Helping to Teach Big A.I. Systems\u201c. The New York Times\u201d.",
250
+ "author": "Mike Isaac. 2023.",
251
+ "venue": "",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "15": {
257
+ "title": "Political astroturfing on twitter: How to coordinate a disinformation campaign.",
258
+ "author": "Franziska B Keller, David Schoch, Sebastian Stier, and JungHwan Yang. 2020.",
259
+ "venue": "Political communication 37, 2 (2020), 256\u2013280.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "16": {
265
+ "title": "Sentiment analysis: Mining opinions, sentiments, and emotions.",
266
+ "author": "Bing Liu. 2020.",
267
+ "venue": "Cambridge university press.",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "17": {
273
+ "title": "Roberta: A robustly optimized bert pretraining approach.",
274
+ "author": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.",
275
+ "venue": "arXiv preprint arXiv:1907.11692 (2019).",
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "18": {
281
+ "title": "Timelms: Diachronic language models from twitter.",
282
+ "author": "Daniel Loureiro, Francesco Barbieri, Leonardo Neves, Luis Espinosa Anke, and Jose Camacho-Collados. 2022.",
283
+ "venue": "arXiv preprint arXiv:2202.03829 (2022).",
284
+ "url": null
285
+ }
286
+ },
287
+ {
288
+ "19": {
289
+ "title": "A study on Twitter user-follower network: A network based analysis. In Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining. 1405\u20131409.",
290
+ "author": "VenkataSwamy Martha, Weizhong Zhao, and Xiaowei Xu. 2013.",
291
+ "venue": "",
292
+ "url": null
293
+ }
294
+ },
295
+ {
296
+ "20": {
297
+ "title": "Umap: Uniform manifold approximation and projection for dimension reduction.",
298
+ "author": "Leland McInnes, John Healy, and James Melville. 2018.",
299
+ "venue": "arXiv preprint arXiv:1802.03426 (2018).",
300
+ "url": null
301
+ }
302
+ },
303
+ {
304
+ "21": {
305
+ "title": "Topic sentiment mixture: modeling facets and opinions in weblogs. In Proceedings of the 16th international conference on World Wide Web. 171\u2013180.",
306
+ "author": "Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007.",
307
+ "venue": "",
308
+ "url": null
309
+ }
310
+ },
311
+ {
312
+ "22": {
313
+ "title": "Public sentiment analysis and topic modeling regarding COVID-19 vaccines on the Reddit social media platform: A call to action for strengthening vaccine confidence.",
314
+ "author": "Chad A Melton, Olufunto A Olusanya, Nariman Ammar, and Arash Shaban-Nejad. 2021.",
315
+ "venue": "Journal of Infection and Public Health 14, 10 (2021), 1505\u20131512.",
316
+ "url": null
317
+ }
318
+ },
319
+ {
320
+ "23": {
321
+ "title": "Efficient estimation of word representations in vector space.",
322
+ "author": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013.",
323
+ "venue": "arXiv preprint arXiv:1301.3781 (2013).",
324
+ "url": null
325
+ }
326
+ },
327
+ {
328
+ "24": {
329
+ "title": "Language and communication.",
330
+ "author": "George Armitage Miller. 1951.",
331
+ "venue": "(1951).",
332
+ "url": null
333
+ }
334
+ },
335
+ {
336
+ "25": {
337
+ "title": "Is the sample good enough? comparing data from twitter\u2019s streaming api with twitter\u2019s firehose. In Proceedings of the international AAAI conference on web and social media, Vol. 7. 400\u2013408.",
338
+ "author": "Fred Morstatter, J\u00fcrgen Pfeffer, Huan Liu, and Kathleen Carley. 2013.",
339
+ "venue": "",
340
+ "url": null
341
+ }
342
+ },
343
+ {
344
+ "26": {
345
+ "title": "Using twitter to examine smoking behavior and perceptions of emerging tobacco products.",
346
+ "author": "Mark Mysl\u00edn, Shu-Hong Zhu, Wendy Chapman, Mike Conway, et al. 2013.",
347
+ "venue": "Journal of medical Internet research 15, 8 (2013), e2534.",
348
+ "url": null
349
+ }
350
+ },
351
+ {
352
+ "27": {
353
+ "title": "ELS: A word-level method for entity-level analysis. In WIMS 2011 Proceedings of the International Conference on Web Intelligence, Mining and Semantics.",
354
+ "author": "E Nikos, L Angeliki, P Georgios, and C Konstantinos. 2011.",
355
+ "venue": "",
356
+ "url": null
357
+ }
358
+ },
359
+ {
360
+ "28": {
361
+ "title": "Uncovering coordinated networks on social media: methods and case studies. In Proceedings of the international AAAI conference on web and social media, Vol. 15. 455\u2013466.",
362
+ "author": "Diogo Pacheco, Pik-Mai Hui, Christopher Torres-Lugo, Bao Tran Truong, Alessandro Flammini, and Filippo Menczer. 2021.",
363
+ "venue": "",
364
+ "url": null
365
+ }
366
+ },
367
+ {
368
+ "29": {
369
+ "title": "Who do you think you are? Common and differential effects of social self-identity on social media usage.",
370
+ "author": "Zhao Pan, Yaobin Lu, Bin Wang, and Patrick YK Chau. 2017.",
371
+ "venue": "Journal of Management Information Systems 34, 1 (2017), 71\u2013101.",
372
+ "url": null
373
+ }
374
+ },
375
+ {
376
+ "30": {
377
+ "title": "Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). 1532\u20131543.",
378
+ "author": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014.",
379
+ "venue": "",
380
+ "url": null
381
+ }
382
+ },
383
+ {
384
+ "31": {
385
+ "title": "Just another day on Twitter: a complete 24 hours of Twitter data. In Proceedings of the International AAAI Conference on Web and Social Media, Vol. 17. 1073\u20131081.",
386
+ "author": "Juergen Pfeffer, Daniel Matter, Kokil Jaidka, Onur Varol, Afra Mashhadi, Jana Lasser, Dennis Assenmacher, Siqi Wu, Diyi Yang, Cornelia Brantner, et al. 2023.",
387
+ "venue": "",
388
+ "url": null
389
+ }
390
+ },
391
+ {
392
+ "32": {
393
+ "title": "Where should one get news updates: Twitter or Reddit.",
394
+ "author": "Shalini Priya, Ryan Sequeira, Joydeep Chandra, and Sourav Kumar Dandapat. 2019.",
395
+ "venue": "Online Social Networks and Media 9 (2019), 17\u201329.",
396
+ "url": null
397
+ }
398
+ },
399
+ {
400
+ "33": {
401
+ "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.",
402
+ "author": "Nils Reimers and Iryna Gurevych. 2019.",
403
+ "venue": "https://arxiv.org/abs/1908.10084",
404
+ "url": null
405
+ }
406
+ },
407
+ {
408
+ "34": {
409
+ "title": "Coronavirus (COVID19) tweets - early April.",
410
+ "author": "S. Smith. 2020a.",
411
+ "venue": "https://www.kaggle.com/datasets/smid80/coronavirus-covid19-tweets-early-april.",
412
+ "url": null
413
+ }
414
+ },
415
+ {
416
+ "35": {
417
+ "title": "Coronavirus (COVID19) tweets - late April.",
418
+ "author": "S. Smith. 2020b.",
419
+ "venue": "https://www.kaggle.com/datasets/smid80/coronavirus-covid19-tweets-late-april.",
420
+ "url": null
421
+ }
422
+ },
423
+ {
424
+ "36": {
425
+ "title": "Whose relevance? Web search engines as multisided relevance machines.",
426
+ "author": "Olof Sundin, Dirk Lewandowski, and Jutta Haider. 2022.",
427
+ "venue": "Journal of the Association for Information Science and Technology 73, 5 (2022), 637\u2013642.",
428
+ "url": null
429
+ }
430
+ },
431
+ {
432
+ "37": {
433
+ "title": "To scrape or not to scrape, this is dilemma. The post-API scenario and implications on digital research.",
434
+ "author": "Domenico Trezza. 2023.",
435
+ "venue": "Frontiers in Sociology 8 (2023), 1145038.",
436
+ "url": null
437
+ }
438
+ },
439
+ {
440
+ "38": {
441
+ "title": "Microblogging during two natural hazards events: what twitter may contribute to situational awareness. In Proceedings of the SIGCHI conference on human factors in computing systems. 1079\u20131088.",
442
+ "author": "Sarah Vieweg, Amanda L Hughes, Kate Starbird, and Leysia Palen. 2010.",
443
+ "venue": "",
444
+ "url": null
445
+ }
446
+ },
447
+ {
448
+ "39": {
449
+ "title": "Predicting $ gme stock price movement using sentiment from reddit r/wallstreetbets. In Proceedings of the Third Workshop on Financial Technology and Natural Language Processing. 22\u201330.",
450
+ "author": "Charlie Wang and Ben Luo. 2021.",
451
+ "venue": "",
452
+ "url": null
453
+ }
454
+ },
455
+ {
456
+ "40": {
457
+ "title": "Event detection in twitter. In Proceedings of the international aaai conference on web and social media, Vol. 5. 401\u2013408.",
458
+ "author": "Jianshu Weng and Bu-Sung Lee. 2011.",
459
+ "venue": "",
460
+ "url": null
461
+ }
462
+ },
463
+ {
464
+ "41": {
465
+ "title": "Detecting spam in a twitter network.",
466
+ "author": "Sarita Yardi, Daniel Romero, Grant Schoenebeck, et al. 2010.",
467
+ "venue": "First monday (2010).",
468
+ "url": null
469
+ }
470
+ },
471
+ {
472
+ "42": {
473
+ "title": "Multi-level structured models for document-level sentiment classification. In Proceedings of the 2010 conference on empirical methods in natural language processing. 1046\u20131056.",
474
+ "author": "Ainur Yessenalina, Yisong Yue, and Claire Cardie. 2010.",
475
+ "venue": "",
476
+ "url": null
477
+ }
478
+ },
479
+ {
480
+ "43": {
481
+ "title": "Extrapolating psychological insights from Facebook profiles: A study of religion and relationship status.",
482
+ "author": "Sean Young, Debo Dutta, and Gopal Dommety. 2009.",
483
+ "venue": "CyberPsychology & Behavior 12, 3 (2009), 347\u2013350.",
484
+ "url": null
485
+ }
486
+ }
487
+ ],
488
+ "url": "http://arxiv.org/html/2401.15479v4"
489
+ }
20241127/2402.14244v2.json ADDED
@@ -0,0 +1,316 @@
1
+ {
2
+ "title": "MENTOR: Guiding Hierarchical Reinforcement Learning with Human Feedback and Dynamic Distance Constraint",
3
+ "abstract": "Hierarchical reinforcement learning (HRL) provides a promising solution for complex tasks with sparse rewards of agents, which uses a hierarchical framework that divides tasks into subgoals and completes them sequentially. However, current methods struggle to find suitable subgoals for ensuring a stable learning process. To address the issue, we propose a general hierarchical reinforcement learning framework incorporating human feedback and dynamic distance constraints, termed MENTOR, which acts as a \u201cmentor\u201d. Specifically, human feedback is incorporated into high-level policy learning to find better subgoals. Furthermore, we propose the Dynamic Distance Constraint (DDC) mechanism dynamically adjusting the space of optional subgoals, such that MENTOR can generate subgoals matching the low-level policy learning process from easy to hard. As a result, the learning efficiency can be improved. As for low-level policy, a dual policy is designed for exploration-exploitation decoupling to stabilize the training process. Extensive experiments demonstrate that MENTOR uses a small amount of human feedback to achieve significant improvement in complex tasks with sparse rewards. Further details and code implementations can be found at https://github.com/nidesuipao/MENTOR.git.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The problem of sparse reward is consistently challenging in the domain of reinforcement learning (RL) [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###], attributing to two main factors: challenging exploration, and unstable training. In recent years, several approaches have been proposed to relieve these issues, including goal-conditional reinforcement learning [4 ###reference_b4###], curiosity-driven exploration [5 ###reference_b5###, 6 ###reference_b6###] and hierarchical reinforcement learning (HRL) [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###].\nHierarchical Reinforcement Learning (HRL) is effective for long-horizon tasks with sparse rewards, as it decomposes tasks into more manageable subgoals, mitigating challenges related to exploration and unstable training. However, there are two primary challenges in its practical applications.\n1) Generating effective subgoals. To create efficient subgoals that guide low-level policies, manual design [11 ###reference_b11###, 12 ###reference_b12###] and automatic generation methods [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 9 ###reference_b9###] have been proposed. However, manual design is resource-intensive and struggles with complex tasks [16 ###reference_b16###], while automatic generation demands significant computational resources to explore the entire state space [17 ###reference_b17###].\n2) Efficient subgoal completion. During low-level learning, hindsight relabeling [4 ###reference_b4###] adjusts subgoals to turn failed transitions into successes but lacks the capacity for effective exploration. Curiosity-driven techniques, such as Random Network Distillation (RND), prevent revisiting states [18 ###reference_b18###] but can destabilize training due to reward bonuses associated with exploration, particularly in sparse reward settings. Frequent failures in low-level subgoal achievement can lead to non-stationarity at the high level. While some studies [7 ###reference_b7###, 8 ###reference_b8###] address these issues through hindsight mechanisms, e.g., penalizing high-level policies [7 ###reference_b7###], they do not fully resolve the problem of ensuring that low-level policies consistently accomplish tasks."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Related work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "1.1.1",
19
+ "parent_section_id": "1.1",
20
+ "section_name": "I-A1 Hierarchical reinforcement learning",
21
+ "text": "In the field of HRL, identifying meaningful subgoals within long-horizon tasks has been extensive research. This includes studies on options [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###], goals [4 ###reference_b4###, 7 ###reference_b7###, 22 ###reference_b22###] and skills [14 ###reference_b14###, 15 ###reference_b15###, 23 ###reference_b23###, 24 ###reference_b24###]. Manual-designed subgoals are costly and challenging for complex tasks [16 ###reference_b16###]. Automatic learning of meaningful subgoals without any guidance from an external expert is a significant challenge [25 ###reference_b25###]. CSD [15 ###reference_b15###] aims to discover latent skills via mutual-information maximization. However, combining meaningful skills into task completion is a continuous challenge. Director [26 ###reference_b26###] introduces a practical method for learning hierarchical behaviors directly from pixels within a learned world model. [27 ###reference_b27###] proposes a skill-based hierarchical reinforcement learning (SHRL) framework for solving the problem of visual navigation of a target. The SHRL framework consists of a high-level policy and three low-level skills: search, adjustment, and exploration. [28 ###reference_b28###] introduces a novel framework called HIGL, which effectively reduces the action space of high-level policies by sampling informative landmarks for exploration and training the high-level policy to generate subgoals towards selected landmarks. HAC [7 ###reference_b7###] addresses the challenges of sparse reward and non-stationarity at high-levels through the utilization of hindsight action transitions and hindsight goal transitions. HIRO [8 ###reference_b8###] employs a model to generate a new high-level action to rectify the high-level transitions. AGILE [29 ###reference_b29###] addresses the non-stationarity issue by using adversarial learning to guide the generation of compatible subgoals for the low-level policy. Previous work has centered on resolving non-stationary issues or on task decomposition strategies, yet often faces the challenge of sparse rewards, prompting unnecessary exploration. Our study introduces human guidance and DDC to decompose tasks effectively. Human guidance directs subgoals, reducing reward sparsity, while DDC controls subgoal difficulty, synchronizes learning across levels, and relieves non-stationarity issues."
22
+ },
23
+ {
24
+ "section_id": "1.1.2",
25
+ "parent_section_id": "1.1",
26
+ "section_name": "I-A2 Reinforcement Learning from Human Feedback",
27
+ "text": "The surge in popularity of ChatGPT [30 ###reference_b30###, 31 ###reference_b31###] has significantly boosted the recognition of RLHF in recent times. RLHF is a technique for aligning the functionalities of Large Language Models (LLMs) with human ethical considerations and preference, achieved by integrating human feedback into the learning process [32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###, 36 ###reference_b36###]. For instance, [37 ###reference_b37###] demonstrates how human preferences can be integrated into text-to-image generation models, aligning them with human aesthetics. [38 ###reference_b38###] introduces a GAN-augmented reinforcement learning strategy that efficiently learns robot behaviors through human preferences, significantly reducing the reliance on human demonstrations. RLHF learns reward functions from pairwise comparison and ranking based on human preference [31 ###reference_b31###, 32 ###reference_b32###, 39 ###reference_b39###, 40 ###reference_b40###]. Human intuition and experience can be incorporated as guidance in high-level decision-making, particularly in setting subgoals within the HRL framework [18 ###reference_b18###]. Nevertheless, humans may struggle to offer immediate guidance that corresponds with the agent\u2019s capabilities."
28
+ },
29
+ {
30
+ "section_id": "1.2",
31
+ "parent_section_id": "1",
32
+ "section_name": "Our Contribution",
33
+ "text": "We introduce a novel framework, MENTOR, which integrates human guidance into the high-level policy learning process. This is achieved through RLHF, a method that trains a reward model by learning human preferences via binary comparisons of subgoals. MENTOR utilizes this reward model to generate subgoals at the high-level, effectively steering the agent towards optimal behavior. Additionally, we introduce DDC. It measures subgoal difficulty by distance and adjusts the subgoal space accordingly. To enable an agent to quickly complete subgoals at the low-level, we introduce Exploration-Exploitation Decoupling (EED), which uses one policy to explore while the other policy learns from the experience of the exploring policy to stabilize the training. We summarize the main contributions as follows:\nWe propose MENTOR, leveraging human feedback to guide the subgoal direction and Exploration-Exploitation Decoupling to simultaneously realize exploration and exploitation in subgoal attainment.\nWe introduce Dynamic Distance Constraint, dynamic aligning the subgoal difficulty to the capabilities of the low-level policy.\nWe demonstrate that MENTOR outperforms other baselines in accomplishing tasks with sparse rewards across various domains."
34
+ },
35
+ {
36
+ "section_id": "2",
37
+ "parent_section_id": null,
38
+ "section_name": "II Preliminary",
39
+ "text": ""
40
+ },
41
+ {
42
+ "section_id": "2.1",
43
+ "parent_section_id": "2",
44
+ "section_name": "II-A Problem Setting",
45
+ "text": "Define Markov Decision Process with a set of goals, characterized by the tuple , where , , and are the sets of state, goal, and action, respectively, is the transition probability function, is the reward function, and is the discount rate . The objective is to find an optimal policy such that\nwhere denotes a trajectory generated under the policy starting from an initial state. The reward function can be defined as , which is a distance threshold determining contact."
46
+ },
47
+ {
48
+ "section_id": "2.2",
49
+ "parent_section_id": "2",
50
+ "section_name": "II-B Hindsight Relabelling",
51
+ "text": "Hindsight relabelling [4 ###reference_b4###] addresses sparse reward in GCRL by redefining failed transitions as successful transitions, thus generating positive rewards. Specifically, for a failed transition with , it resets the transition as with goal . By utilizing this new transition, it becomes possible to train an off-policy RL algorithm with more positive rewards. [4 ###reference_b4###] also delve three heuristic methods for goal relabeling to enhance learning efficiency: (1) Future Sampling, which selects goals from subsequent states within the same trajectory; (2) Episode-Specific Random Sampling, which randomly selects goals from the same episode without considering their order; (3) Random Sampling, which chooses new goals from the entire dataset. In our implementation, we have opted for the Future Sampling strategy."
52
+ },
53
+ {
54
+ "section_id": "2.3",
55
+ "parent_section_id": "2",
56
+ "section_name": "II-C Curiosity-driven Exploration",
57
+ "text": "Intrinsic motivation has been utilized to encourage agents to learn about their surroundings, even when extrinsic rewards are scarce [5 ###reference_b5###, 41 ###reference_b41###, 42 ###reference_b42###]. Curiosity-driven exploration is an intrinsic motivation strategy, exemplified by algorithms like RND, which fosters learning through the agent\u2019s desire to discover new information. In this framework, RND [5 ###reference_b5###] serves as the intrinsic reward. RND consists of two neural networks, represented as and , which are both randomly initialized and capable of transforming observations into embeddings. By fixing one network and training the other to minimize the prediction error, we follow the distillation optimization process:\nwhere is sampled from the replay buffer. Upon the agent\u2019s interaction with the environment, yielding the current state , we proceed to compute RND reward:\nThis RND reward encourages the agent to visit novel states, as it will be higher in regions of the state space that the predictor network finds difficult to approximate, thus fostering exploration in the learning process. To ensure effective policy optimization and stability in reinforcement learning, it is essential that the RND reward, as an intrinsic reward, is normalized to align with the scale of extrinsic rewards:\nWe determine the normalized intrinsic reward for state by converting the RND reward into a Z-score, aligning it with the mean and scaling it according to the standard deviation ."
58
+ },
59
+ {
60
+ "section_id": "3",
61
+ "parent_section_id": null,
62
+ "section_name": "III Methodology",
63
+ "text": "###figure_1### HRL consists of multiple levels, each focused on solving specific problems. The high-level is responsible for providing overall guidance, while the low-level handles task execution. This difference in responsibilities between the levels requires distinct reward function characteristics. Depending on unique features of diverse levels, we propose MENTOR, shown in Figure 1 ###reference_###(c). At the high-level, it utilizes RLHF and DDC to address the challenge of generating instructive subgoals shown in Figure 1 ###reference_###(a). At the low-level, it decouples exploration and exploitation, solving the instability associated with curiosity-driven exploration shown in Figure 1 ###reference_###(b).\nBefore introducing the framework, we briefly review here to introduce notation. The framework consists of four neural networks: high-level policy , low-level policy , reward model learned by human feedback , RND model and distance model with parameters , , , and respectively. The distance model, denoted as , operates within the Dynamic Distance Constraint. In a given episode, initially proposes subgoal based on current state and environment goal , followed by task given by to achieve . If succeeds and the episode remains active, subsequently issues a new subgoal . Thus, the high-level trajectory is represented as (), where signifies the moment of the -th subgoal generation by . Concurrently, the low-level trajectory unfolds as (), denotes the trajectory length."
64
+ },
65
+ {
66
+ "section_id": "3.1",
67
+ "parent_section_id": "3",
68
+ "section_name": "III-A RLHF and Dynamic Distance Constraint in High-level",
69
+ "text": ""
70
+ },
71
+ {
72
+ "section_id": "3.1.1",
73
+ "parent_section_id": "3.1",
74
+ "section_name": "III-A1 Subgoal generation using RLHF",
75
+ "text": "Subgoal generation poses a significant challenge, with manual setup being costly and difficult, while automatic methods demand extensive computational resources to explore the state space. High-level subgoal generation requires macro, abstract, and generalized guidance, closely related to human preferences. RLHF offers human preferences through sample comparisons. Despite the inherent uncertainty and noise in the human feedback acquired through pairwise preference comparisons, this approach for reward model learning effectively guides the high-level policy learning because HRL mitigates the issue of uncertainty and noisy preferences in RLHF, creating a mutual benefit for both HRL and RLHF.\nRLHF uses pairwise comparison to train a reward model which can be used as the reward function for the high-level policy. The training process consists of (1) extracting pairs and from the high-level replay buffer. (2) Human annotators provide pairwise feedback which 0 prefers , prefers and for same preference. Preference can be assessed by the distance to the environment goal . (3) After collecting the feedback dataset into the reward model buffer , we can train the reward model by optimizing a modified Bradley-Terry objective [43 ###reference_b43###]. We define the possibility that human prefers to :\nThen, we define two probabilities:\nand the modified Bradley-Terry objective is as follows:\nIn optimizing the high-level policy, the high-level reward function is set to be . In the dynamic process of execution, humans and algorithms engage in ongoing interactions, where high-level policies can be subject to real-time guidance from human operators."
76
+ },
77
+ {
78
+ "section_id": "3.1.2",
79
+ "parent_section_id": "3.1",
80
+ "section_name": "III-A2 Dynamic Distance Constraint for Subgoal Difficulty Adjustment",
81
+ "text": "while humans can provide direction for high-level goals by expressing preferences, they may struggle to define subgoals that align with the low-level policy\u2019s capabilities. It is quite possible that influenced by their cognition, humans might prefer subgoals that are close to the ultimate goal. Consequently, if the reward model learned by human preferences is designed to assign the highest rewards when the subgoal coincides with the goal, it could lead to a scenario where the high-level policy, without considering the low-level\u2019s capacity and basing its decisions solely on human feedback, generates subgoals that rapidly converge towards the goal. This approach could render the subgoals excessively challenging and diminish their instructional value. Thus multi-level simultaneous learning may be uncoordinated.\nTo solve this issue above, we introduce DDC, whose function is illustrated in Figure 2 ###reference_###. This method limits the range of subgoals based on their distance from the current state (the variation in green shading depicted in the figure). Under the function of DDC, the learning process of our framework MENTOR is similar to curriculum learning[44 ###reference_b44###, 45 ###reference_b45###, 46 ###reference_b46###]. Curriculum learning facilitates a sequential learning process where simpler subgoals are mastered before progressing to more complex ones, thus laying a solid foundation for advanced subgoal acquisition. This is achieved through a specific formulation:\nwhere represents the distance between the subgoal and the achieved goal . The parameter sets the subgoal space range. The constraint ensures that the distance between the high-level subgoal and the current achieved goal remains within a range of lengths, and we can adjust to progressively reduce the difficulty. However, in scenarios like the Four Rooms domain, the Euclidean distance as may not accurately assess the difficulty of completing a task. For instance, a goal in the top right might be hard to reach despite a low Euclidean distance, indicating the need for a learned step distance measure. DDL [47 ###reference_b47###] offers a methodology for training distance models by randomly sampling state pairs from trajectories, to approximate the number of steps between states. Nevertheless, it\u2019s crucial to evaluate the step distance between unreachable subgoals and the current state. If the low-level policy fails to reach a subgoal, assigning a high distance value to these unreached subgoals relative to the current state is essential. Neglecting to do so, high-level policies might incorrectly view challenging subgoals as easy, leading to unrealistic subgoal proposals. To tackle this issue, we recommend incorporating extra samples containing unreached subgoals into the distance model objective:\nThe objective is formulated by minimizing the expected loss across trajectories , sampled using the low-level policy from recent episodes, and the goals drawn from the environment\u2019s distribution . is the length of the trajectory.\nIn Equation 6 ###reference_###, optimizing is challenging because of the strict constraint. To overcome this difficulty, we utilize the penalty function method [48 ###reference_b48###, 49 ###reference_b49###], which allows us to establish an unconstrained optimization objective:\nwhere is a balancing coefficient to adjust the influence of reward from human guidance and distance constraint. 
serves as a penalty function that imposes a cost when the subgoal deviates from the achievable range defined by the parameter . This parameter acts as a threshold, defining the maximum allowable distance for a subgoal to be attainable. This mechanism ensures that the subgoals chosen by the high-level policy are both in harmony with human guidance and within the operational capacity of the low-level policy.\nAs the capabilities of the low-level improve, it becomes necessary to dynamically adjust the parameter to ensure that the difficulty of the subgoals remains appropriately challenging. The adjustments in lead to different optimization goals for the model, which can cause several issues: (1) The coefficient needs to strike a balance between being large enough to ensure constraints are met and small enough to avoid excessive punishment. If the penalty coefficient is set too high, it may overly restrict the solution search space, causing the algorithm to miss the optimal solution or fail to converge. Conversely, if the penalty coefficient is too low, it may not effectively enforce the constraints, leading to solutions that do not satisfy the actual problem constraints. Therefore, a static coefficient may not preserve the optimal balance between human guidance and constraints when varies. (2) Different values of lead to distinct distributions of subgoals. Sampling data pairs solely from the comprehensive offline dataset does not ensure that the reward model accurately assigns rewards to subgoals under the current . (3) Assuming policy convergence based on a specific , it\u2019s challenging to quickly re-optimize the high-level policy when switching to a new . In response to these challenges, we have implemented three designs to ensure stability in policy updates when there are changes in .\nAutomatic balancing coefficient. We implement a dynamic balancing coefficient which will be updated in real-time to maintain a balance between the rewards and distance constraint.\nTo effectively incorporate the adjustment into MENTOR, we need to optimize two parameters simultaneously: high-level policy and balancing coefficient , converting our maximization into a dual problem:\nThe distance constraint function guarantees that the subgoals within a distance . This function treats subgoals that are within the distance as having zero effects. However, in the case of , we want it to decrease when this constraint does not work, and increase when the high-level policies make decisions that do not satisfy this constraint. Since is always greater than 0 and the update gradient for is singular, we need to eliminate the function from to achieve automatic updates for . We update the policy firstly and then coefficient following modification for the optimization:\nNear-policy sampling. The probability distribution of the data varies depending on the sampling policy and the different values of , even though all the data exist in an offline experience pool. It is inefficient to train the reward model using all off-policy data. Once the low-level policy has become proficient at achieving subgoals, it becomes redundant to use these subgoals for the training of the reward model, which is intended to enhance the current policy. To address this issue, we propose a new method that involves training the reward model with near-policy data, using pairs that are sampled from recent episodes. 
This approach allows the reward model to focus on the data that is most relevant to the current policy, enabling a more accurate evaluation of the current policy\u2019s performance. Moreover, training a reward model that can accurately evaluate the behavior of the current policy requires fewer samples and training iterations, which results in reduced computational consumption.\nNear-policy sampling can also be efficiently executed by merely preserving data from the newest episodes or their respective indices within the experience pool, and subsequently, randomly extracting from these recent datasets when gathering the comparison samples for the reward function training.\nHigh-level exploratory mechanism. As depicted in Figure 2 ###reference_###, the high-level policy may encounter a local optimum after proposing a subgoal in the lower-left region (left panel) and the low-level policy has adapted to achieve it, particularly upon adjusting the parameter. This policy is guided by a reward model trained on data from the replay buffer and human feedback, lacking initial pre-training. With incremental values, the high-level policy, potentially already converged, may limit its exploration to a less diverse dataset. This restricted exploration could prevent the identification and storage of superior subgoals in the replay buffer, thereby impeding the enhancement of the reward model and leading to policy training stagnation. To counteract this, it is crucial to promote high-level exploration. We have integrated exploration techniques such as RND and Maximum Entropy (MaxEnt), as detailed in [50 ###reference_b50###]. Specifically, we have adopted the MaxEnt approach, defining the high-level reward as , where is a small constant."
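A minimal sketch of the penalty and dual update implied by the description above: the high-level actor maximizes the learned reward minus a distance penalty, while the balancing coefficient grows whenever proposed subgoals exceed the current range and shrinks once they satisfy it. The ReLU-shaped penalty, the `log_alpha` parameterization, and the small slack term are assumptions.

```python
import torch
import torch.nn.functional as F

def ddc_losses(reward_out, distance_out, log_alpha, k, slack=0.05):
    """High-level actor and dual-variable losses for one batch.

    reward_out   : r_psi(s, sg, g) from the preference-trained reward model
    distance_out : d_phi(achieved_goal, sg) from the learned distance model
    log_alpha    : learnable scalar, alpha = exp(log_alpha) > 0
    k            : current subgoal-range parameter of the curriculum
    """
    alpha = log_alpha.exp()
    violation = F.relu(distance_out - k)   # zero while the subgoal is within range

    # actor: maximize human-feedback reward, pay alpha per unit of violation
    actor_loss = -(reward_out - alpha.detach() * violation).mean()

    # dual update: gradient descent on this loss raises alpha while subgoals
    # violate the constraint and lets it shrink once they stay within k
    # (the slack term is an assumption, not taken from the source)
    alpha_loss = -(log_alpha * (violation.detach() - slack)).mean()
    return actor_loss, alpha_loss
```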
82
+ },
83
+ {
84
+ "section_id": "3.2",
85
+ "parent_section_id": "3",
86
+ "section_name": "III-B Exploration-Exploitation Decoupling in Low-level Policy",
87
+ "text": "Although the high-level can provide easier goals for the low-level through RLHF, these goals also serve as guidance for the low-level to move forward. However, sparse rewards hinder the low-level ability to quickly achieve subgoals. RLHF is well-suited for the high-level in HRL, but it is not applicable to the low-level. If the low-level policy incorporated RLHF, the noise and uncertainty in reward model could hinder the completion of the task. Existing technologies may use hindsight relabeling, but before the low-level policy explores the subgoals, this technology cannot guarantee that the agent learns how to complete the subtasks. Due to the limited exploration ability of the low-level policy, the agent may fall into local optima and repeatedly explore meaningless areas. Introducing RND can mitigate repeated exploration by promoting novel discoveries, though its direct implementation may result in instability. RND\u2019s exploration incentives might lead to neglecting the task of subgoal completion.\nTo address the issue, we propose EED, a dual-policy program, consisting of an exploration policy and a base policy , both sharing the same data buffer. During interactions with the environment, we employ the exploration policy and store the experiences in the shared replay buffer. For policy updates, both the exploration and base policies undergo a hindsight relabelling process for relabeled transition data.\nSubsequently, the exploratory policy is refined by optimizing the following objective function:\nThis formulation incorporates an additional element, , which introduces curiosity into the reward structure, thereby fostering exploration. Conversely, the base policy is updated by optimizing the following objective function:\nThis approach enables the base policy to assimilate novel insights from the exploratory data without the inherent burden of the exploration process itself, thus enabling it to focus on the attainment of the defined subgoals.\n###figure_2###"
88
+ },
89
+ {
90
+ "section_id": "3.3",
91
+ "parent_section_id": "3",
92
+ "section_name": "III-C MENTOR Process",
93
+ "text": "Overall, this paper introduces an innovative HRL algorithm, which proficiently identifies an approach for configuring the reward function, RLHF for high-level and EED for low-level. Furthermore, the paper addresses the challenge of inter-layer coordination in HRL by proposing a novel optimization with dynamic distance constraint. In detail, our framework MENTOR works as the following pseudo-code in Algorithm 1 ###reference_###. During the interaction with the environment (from line 4 to line 17), a high-level policy is responsible for selecting a subgoal (line 6). Once the subgoal is determined, a low-level exploration policy, denoted as , is utilized to execute actions until the subgoal is successfully achieved. This process of selecting subgoals and executing actions continues until the end of the episode. The data obtained from the interaction with the environment, both at the high-level and low-level, is stored in two replay buffers known as and . The model update process is implemented in lines 19 to 23. The high-level policy in Algorithm 3 ###reference_### updates and alternately. As for the low-level policy, it involves updating the RND model, applying hindsight, updating the low-level base policy , adding the RND bonus in the batch data, and updating the exploration policy . From lines 21 to 22, preference tuple pairs are sampled from the data of the last few episodes in , and human labels are obtained and stored in . These batches are then used to update and rewrite the reward data in the high-level buffer . Finally, distance training data is sampled from the data of the last few episodes in and used to update the distance model . From lines 25 to 34, it tests the low-level base policy success rate on the subgoal given by high-level policy and adjusts the parameters .\n###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8###"
94
+ },
95
+ {
96
+ "section_id": "4",
97
+ "parent_section_id": null,
98
+ "section_name": "IV EXPERIMENTS",
99
+ "text": "In this section, we perform a range of experiments in various commonly used domains [51 ###reference_b51###, 52 ###reference_b52###, 53 ###reference_b53###], as depicted in Figure 3 ###reference_###. Through the experiments, we aim to answer the following research questions (RQs):\nHow does the performance of MENTOR in comparison to baseline models in various domains?\nWhat is the impact of DDC in MENTOR?\nWhat is the impact of human feedback in MENTOR?\nWhat insights can be gained about the individual contributions of key components in MENTOR?"
100
+ },
101
+ {
102
+ "section_id": "4.1",
103
+ "parent_section_id": "4",
104
+ "section_name": "IV-A Setup",
105
+ "text": "Benchmarks: We select FetchPush, FetchPickAndPlace, FetchDraw, FetchObsPush, Pusher, and Four rooms as our simulation benchmarks, widely used in research [4 ###reference_b4###, 18 ###reference_b18###, 53 ###reference_b53###]. As illustrated in Figure 3 ###reference_###, the first five domains involve long-horizon manipulation tasks, while Four rooms focuses on 2D navigation. In our experiments utilizing the RLHF, we employed two distinct approaches to acquire human labels: manual labeling by humans and synthetic labeling through scripts. The specifics of these label acquisition processes are delineated in the Appendix B ###reference_.SSS0.Px1###. Across the experimental trials, we primarily utilized synthetic labels, with the exceptions explicitly noted.\nHardware: The computer\u2019s hardware is equipped with an AMD Ryzen 7 5700X processor, 32GB of RAM, and an NVIDIA GeForce RTX 3070 Ti graphics card.\nBaselines: We have incorporated various learning methods in our baseline implementation, including techniques from HRL, RLHF, RND, dynamic distance learning, and hindsight relabelling.\nReinforcement Learning from Human Feedback (RLHF) firstly establishes a reward model based on human feedback and then using it to optimize policy in reinforcement learning. PEBBLE is a RLHF framework that that combines unsupervised pre-training and preference-based learning to significantly improve the sample and feedback efficiency of human-in-the-loop reinforcement learning [32 ###reference_b32###]. The key concept of PEBBLE is to learn a reward model on human preference. Unlike the learning strategy of the reward model in the MENTOR, it has an unsupervised pretraining step via intrinsic reward. Here, we apply the state entropy . By converting the policy to a simpler version, the pretraining reward function is set as in the batch. This implies that maximizing the distance between a state and its nearest neighbor increases the overall state entropy.\nHindsight Relabeling is a data augmentation method for multi-task reinforcement learning that facilitates data sharing across tasks by relabeling experiences to improve sample efficiency. Our baseline, HER [4 ###reference_b4###] is a classical hindsight relabeling strategy. HER enables efficient learning from limited and non-diverse rewards by re-framing failed attempts as successes towards different goals. In our implementation, we utilize a policy to incorporate HER. When sampling batch transitions from the replay buffer, we employ hindsight technology to modify certain parts of the transitions. It is important to mention that the high-level policy differs from the low-level policy of MENTOR in certain aspects. Unlike the low-level policy, the high-level policy solely receives goals from the environment and does not incorporate an RND bonus.\nHierarchical Reinforcement Learning with Human Feedback (HRL-HF) integrates human feedback into the generation of high-level subgoals. We follow the architecture of HhP [18 ###reference_b18###]. HhP introduces an HRL method that integrates human preferences and curiosity-driven exploration. By using human preferences to guide decision-making at high-levels and curiosity to promote environmental exploration at low-levels. HhP is considered to be inefficient in combining HRL and RLHF, and it also introduces bias at the low-levels. 
When we trained HhP, we found that this algorithm was difficult to converge, and to ensure the effectiveness of training, we introduced the hindsight relabelling technique in this algorithm as well.\nDistance Learning (DL) learns distance models. The model is utilized to train goal achievement by setting the reward as the negative distance between states and goals. This baseline follows DDL [47 ###reference_b47###]. DDL calculates dynamical distances, defining them as the expected number of steps between states in a system. This method leverages unsupervised interactions for learning distances. In the distance evaluation step, the aim is to estimate the dynamic distance between pairs of states visited by a given policy. A distance function, denoted as , is utilized for this purpose. To calculate the distance, multiple trajectories, denoted as , are generated by rolling out the policy. Each trajectory has a length of T. The empirical distance between states and in the trajectory is then calculated, where . The empirical distance is simply given by the difference between j and i. To learn the distance function, supervised regression can be employed by minimizing the objective:\n.\n###figure_9###"
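A sketch of the dynamical-distance regression used by the DDL baseline and by the distance model in DDC, including the extra regression targets for unreached subgoals mentioned earlier. Network sizes, the pair count, and the choice of T as the "far" target are assumptions.

```python
import torch
import torch.nn as nn

class DistanceModel(nn.Module):
    """Predicts the number of policy steps between two achieved goals."""
    def __init__(self, goal_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, g1, g2):
        return self.net(torch.cat([g1, g2], dim=-1)).squeeze(-1)

def distance_loss(model, traj_goals, unreached_subgoal=None, n_pairs=256):
    """Supervised regression of the distance model onto empirical step gaps.

    traj_goals        : (T, goal_dim) achieved goals of one recent trajectory
    unreached_subgoal : optional (goal_dim,) subgoal the policy failed to reach;
                        it is regressed onto a large distance (T here, an
                        assumption) so it is not mistaken for an easy target.
    """
    T = traj_goals.shape[0]
    i = torch.randint(0, T, (n_pairs,))
    j = torch.randint(0, T, (n_pairs,))
    lo, hi = torch.minimum(i, j), torch.maximum(i, j)
    loss = ((model(traj_goals[lo], traj_goals[hi]) - (hi - lo).float()) ** 2).mean()
    if unreached_subgoal is not None:
        far = model(traj_goals[lo], unreached_subgoal.expand(n_pairs, -1))
        loss = loss + ((far - float(T)) ** 2).mean()
    return loss
```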
106
+ },
107
+ {
108
+ "section_id": "4.2",
109
+ "parent_section_id": "4",
110
+ "section_name": "IV-B Performance Evaluation (RQ1)",
111
+ "text": "In our experiments across six domains, we find that MENTOR excels in learning speed and subgoal attainment as evidenced in Figure 4 ###reference_###. This assessment uses five random seeds and is evaluated over 50 tests. As can be seen from DDL and PEBBLE curves. DDL and PEBBLE rarely learn effectively in complex GCRL environments. DDL\u2019s poor performance may be attributed to the absence of guiding signals and the instability of the reward function. Pairwise comparison guidance makes it difficult for PEBBLE to complete subgoal. HER, a classic GCRL algorithm, serves as a benchmark for evaluating other algorithms. Yet, HhP, despite integrating human guidance and curiosity-driven exploration, underperforms compared to HER in FetchPush. Its benefits are also unclear in Pusher and Four Rooms. This indicates HhP\u2019s inadequate use of human guidance and curiosity in exploration. In contrast, MENTOR, by incorporating Dynamic Distance Constraint and Exploration-Exploitation Decoupling, effectively leverages these elements for utilization of human feedback and exploration, achieving faster and more consistent training than DDL, PEBBLE, and HER. Our analysis highlights the superior performance of MENTOR.\nHowever, MENTOR demonstrates superior performance and is equipped with a greater number of components. To investigate the resource consumption of our algorithm, we conducted additional experiments, meticulously recording the model parameter count and computation time for 1000 iterations of MENOTR and HER in the FetchPush environment. As detailed in Table I ###reference_###, our results reveal that MENTOR has approximately 263% more parameters and requires roughly 176% more running time compared to HER. Although our model has a larger number of components, which significantly increases the parameter count, considering that modern GPUs typically have 8GB or more of memory, these models do not demand excessively high computational resources. Moreover, by examining the convergence of HER in Figure 4 ###reference_### and the computational time consumed by both MENTOR and HER algorithms over 1000 episodes, we found that our algorithm consumes 76% more time. However, the convergence speed of our algorithm is more than 3 times that of HER. Therefore, our algorithm remains highly efficient in terms of resource consumption."
112
+ },
113
+ {
114
+ "section_id": "4.3",
115
+ "parent_section_id": "4",
116
+ "section_name": "IV-C Impact of DDC (RQ2)",
117
+ "text": "Our study will examine DDC\u2019s effects on learning agents, assessing its impact on agent behaviors and the efficacy of three designs in achieving stable algorithmic convergence. We also explore DDC\u2019s synergism with human feedback.\n###figure_10### ###figure_11### Examining the correlation between task completion and DDC. As illustrated in Figure 5 ###reference_###, when comparing the black and green curves on the lower side, It is evident that DDC can regulate the difficulty of subgoals provided by limiting the distance. Without DDC, the high-level policy proposes subgoals at random difficulty. By examining this phenomenon in conjunction with the success rate of the subgoals, we can draw the following conclusions: (1) during the initial stages of the training process, the low-level policy can rapidly acquire the ability to achieve easy subgoals. (2) Once the low-level policy has successfully mastered a subgoal of a certain difficulty level, it can efficiently progress to learning subgoals of slightly higher difficulty. When there is no DDC, subgoals of randomized difficulty lead to slower learning of the low-level policy. It is concluded that the DDC can efficiently coordinate high-level and low-level.\n###figure_12### Analyzing the impact of automatic balancing coefficient. DDC has a mechanism to automatically adjust the balancing coefficient. The mechanism plays a crucial role in balancing the impact of the reward model and distance constraint. As shown in Figure 6 ###reference_###, fixed values of 0.5, 5, and 20 for are ineffective in learning compared to the automatic adjustment setting. Small values (0.5 and 5) slow DDC convergence. Large (20) enhances learning speed but makes it unstable and sensitive to .\nTo gain more insight into the details of difficulty adjustment under different , we recorded the change curve of ( of 0.05) under different settings in Figure 7 ###reference_###. Analyzing the value change curves for different values reveals how subgoal difficulty adapts throughout training. These curves track the incremental changes in the value ( = 0.05) and highlight the influence of the parameter on the stability and progression of the learning trajectory. Lower values lead to a more gradual increase in the value, promoting a stable but slower learning process. In contrast, higher values cause more pronounced fluctuations, which may accelerate learning but also introduce greater risk of instability. The Auto- setting demonstrates both rapid and stable value growth, indicating an adaptive strategy that balances speed and stability in the learning process. These value curves also serve as indicators of the low-level policy\u2019s progress: consistent increases suggest ongoing improvement, while plateaus or declines may signal the need to reassess the strategy. These analyses highlight the crucial role of the parameter in shaping the learning process.\n###figure_13### ###figure_14### Analyzing the near-policy sampling for reward model training and the exploratory mechanism at the high-level. During the training process of MENTOR, we observe some phenomena that the high-level policy cannot be well guided by the rewarding model. These observations can be ascribed to two factors: (1) the reward model\u2019s inadequate guidance of the current model, and (2) the increase in the value leading to the agents in local optima, thereby diminishing further exploration of potential subgoals. 
In Figure 8 ###reference_###, the reward heatmap, shaped by human feedback, approximates the oracle function. We observe that when near-policy samples are absent, the reward model tends to emphasize the entirety of the state space. This, in turn, makes it more challenging and computing resource-consuming for the reward model to effectively learn.\nAdditionally, we have observed that there are instances where the high-level policy becomes stuck in local optima as we incrementally increase the value of for a long time. This phenomenon is attributed to two reasons: (1) it lacks pretraining at the high-level, and (2) the high-level lacks the exploratory ability, leading to not trying new subgoals. Pretraining plays a crucial role in RLHF, yet it can be inefficient at the high-level due to limited samples (often only one per episode). In the operation of the high-levels, we find it necessary to increase the exploratory ability. As shown in Figure 8 ###reference_###, reward learning becomes challenging without MaxEnt (maximum entropy). We compare RND and MaxEnt in terms of exploratory at high-levels. After conducting experiments displayed in Figure 10 ###reference_###, we discovered that MaxEnt outperforms RND. While RND promotes the exploration of novel states, MaxEnt focuses on improving action entropy. The combination of the reward model and DDC can lead to previously disregarded decisions being recognized as good decisions. RND can learn and incorporate these decisions previously, treating them as non-novel at late times. However, MaxEnt does not encounter this issue and as a result, it achieves significantly better performance compared to RND.\n\n###figure_15### w/ DDC\n\n###figure_16### w/o DDC\n###figure_17### Analyzing the overlay effects of DDC and human feedback. Having investigated DDC\u2019s impact, we now turn to assess the combined influence of human guidance and DDC on subgoal formulation. Figure 9 ###reference_### depicts the training phase subgoal distribution, both with and without DDC. To the right of the figure, DDC\u2019s inactivity results in a fragmented learning interaction between levels. While the high-level, aided by human guidance, swiftly navigates to the goal, it neglects the low-level\u2019s execution capabilities, leading to transient and inefficient guidance, evident in the predominance of red subgoals in most regions except the upper-right corner. In contrast, on the figure\u2019s left, with DDC active, the high-level delineates a subgoal sequence starting from the lower right, proceeding to the lower left, then the upper left, and culminating at the upper right corner. This strategy, different from the scenario without DDC, significantly bolsters the efficiency of human guidance, contributing to MENTOR overall effectiveness.\n###figure_18### We also analyze the heatmap of the reward and distance model. The analysis of the heatmaps in Figure 11 ###reference_### shows complex interactions between the distance and reward models in the MENTOR framework. The distance model assesses subgoal difficulty. This overlap, calculated with , identifies areas optimizing rewards in alignment with the low-level policy\u2019s capabilities. However, without DDC, the human feedback-based reward model is limited, indicating only high-reward areas without guiding the agent on how to reach them. This can also corroborate the subgoal distribution in Figure 9 ###reference_###."
118
+ },
119
+ {
120
+ "section_id": "4.4",
121
+ "parent_section_id": "4",
122
+ "section_name": "IV-D Impact of Human Feedback (RQ3)",
123
+ "text": "This part is dedicated to examining the effects of human feedback on Model MENTOR. Firstly, we evaluate the influence of the frequency and quantity of human feedback on algorithmic performance. Following this, we examine the differences between real human labels and synthetic labels, explain the rationale for using synthetic labels in our experiments, and demonstrate the practical applicability of our algorithm to real-world contexts.\nAnalyzing the quantity and frequency of feedback. In Table II ###reference_###, the data shows the number of training episodes needed for 100 task success across different query frequencies and batch sizes. The table illustrates that integrating human feedback into training markedly improves learning speed, as highlighted by the contrast between label-free experiments and those with feedback. Further analysis demonstrates a consistent pattern: higher feedback frequency and quantity correlate with increased success rates. When considering the total labels and their impact on learning speed, our algorithm significantly enhances efficiency and stability by requiring only a small number of labels: 10 per 100 episodes, totaling 180, compared to experiments with non-feedback.\n###figure_19### ###figure_20### ###figure_21### ###figure_22### Comparing human collected labels and synthetic labels. In our study, we employ two methods for providing human guidance: synthetic labels generated through scripted techniques and human-generated labels.\nWe monitored the disagreement rates between authentic human labels and synthetic labels, as well as the accuracy of the reward models, as shown in Figure 12 ###reference_###. Given that the starting and ending positions in the Four rooms domain are predetermined, we provided 10 labels every 25 episodes. Conversely, in FetchPush where positions are not fixed, we provided 50 labels every 50 episodes. The data reveals discrepancies between human and synthetic labels, suggesting that human labeling is susceptible to errors due to subjective factors. Furthermore, the RLHF method produced a model whose accuracy converged to approximately 80%, indicating that even the trained model has its uncertainties and potential inaccuracies. This is also supported by the difference between the heatmap of the trained reward model and the heat map of the oracle reward model in Figure 11 ###reference_###. However, our algorithm demonstrates sufficient robustness to handle the inaccuracies inherent in RLHF. This implies that integrating RLHF into the high-level policy is a highly compatible approach; on the one hand, RLHF fulfills the reward requirements of the high-level, and on the other hand, the low-level policy remains unaffected by the inaccuracies of RLHF. During the algorithm testing, the low-level policy completes tasks independently, hence the final performance of the algorithm is not affected.\nWe compare the performance of human collected labels and synthetic labels in FetchPush and Four rooms domains. As illustrated in Figure 13 ###reference_###, models trained with human labels exhibited comparable learning rates, with performance differences potentially attributed to noise in human feedback. After comparing the experiments with human collected labels to the experiments with non-feedback, we can find a significant improvement in the algorithm\u2019s convergence speed and final performance. Therefore, we conclude that synthetic labels have a similar effect on improving the algorithm as real human labels. 
This implies that in other experiments, synthetic labels could potentially serve as a replacement for human collected labels.\n###figure_23### ###figure_24###"
124
+ },
125
+ {
126
+ "section_id": "4.5",
127
+ "parent_section_id": "4",
128
+ "section_name": "IV-E Ablation Studies (RQ4)",
129
+ "text": "The ablation study in the MENTOR framework, focusing on the FetchPickAndPlace and FetchPush domains, evaluates how components like HF (human feedback), DDC, and EED (Exploration-Exploitation Decoupling) contribute to learning efficiency and goal achievement. Figure 14 ###reference_### (top) assesses each module\u2019s effectiveness by systematically removing high-level modules and observing the impact on model performance. The removal of DDC and human feedback results in slower convergence, reduced performance, and decreased stability. The study mentioned in Figure 14 ###reference_### (bottom) discovered that the absence of EED negatively affects the balance between exploration and exploitation, resulting in lower success rates even after algorithm convergence. This emphasizes the significance and interdependence of these modules in improving learning for complex tasks in the MENTOR framework.\n###figure_25### ###figure_26### ###figure_27### ###figure_28###"
130
+ },
131
+ {
132
+ "section_id": "5",
133
+ "parent_section_id": null,
134
+ "section_name": "Conclusion",
135
+ "text": "This study presents MENTOR, an innovative method that combines human feedback and Dynamic Distance Constraint for learning guidance. It integrates high-level human insights for selecting subgoals concerning low-level capabilities and introduces exploration-exploitation decoupling at the low-level to improve training stability and efficiency. Our experiments demonstrate the framework\u2019s effectiveness in complex tasks with sparse rewards, outperforming existing baselines.\nWe recognize the complexity of human guidance beyond just subgoal selection and aim to explore a wider range of feedback integration to enhance learning dynamics. We plan to further expand the framework\u2019s applications and refine its mechanisms, aiming to advance hierarchical reinforcement learning and create more intuitive, adaptable learning systems."
136
+ }
137
+ ],
138
+ "appendix": [
139
+ {
140
+ "section_id": "Appendix 1",
141
+ "parent_section_id": null,
142
+ "section_name": "Appendix A Domains Details",
143
+ "text": "In this section, we will provide further elaboration on the benchmarks utilized for comparing our method with the baselines. Specifically, we will delve into the details of the observation space, action space, and the configuration of the reward function.\nFetchPush. In this domain, the aim is to use a 7-DoF Fetch Mobile Manipulator, equipped with a closed two-fingered parallel gripper, for transporting a block to a set position on a table. The robot\u2019s movement is finely adjusted using Cartesian coordinates, and the MuJoCo framework calculates the inverse kinematics. This task, which involves pushing the block with the gripper constantly closed, is ongoing, requiring the robot to steadily keep the block at the target position. The scenario is observed through a 25-element array, encompassing kinematic details of both the block and gripper. The action space is a Box(-1.0, 1.0, 4, float32), with each action altering the gripper\u2019s Cartesian position and managing its opening and closing. The reward system applies -1 when the block isn\u2019t at the target, and 0 when correctly positioned, defined as being within 0.05 meters of the target.\nFetchPickAndPlace. Utilizing the same robot setup with FetchPush, this domain focuses on moving a block to a defined point, including mid-air locations. It shares the same observation array and action space as FetchPush, with the addition of a goal-aware observation space. The reward system remains consistent with the FetchPush domain.\nFetchDraw. Utilizing the same 7-DoF Fetch Mobile Manipulator with a two-fingered parallel gripper, this task involves the robot\u2019s precise interaction with a drawer. The objective is two-fold: (1) to reach for the drawer handle, and (2) to slide the drawer to a target position by pulling or pushing the handle. The manipulation requires the gripper to perform open and close actions for a firm grasp on the drawer handle. The robot must successfully move and maintain the drawer at the target position indefinitely. The reward system remains consistent with the FetchPush domain.\nFetchObsPush. This task engages the same 7-DoF Fetch Mobile Manipulator, which is equipped with a two-fingered parallel gripper. The robot is tasked with manipulating a block to a desired position on a table in the presence of an obstacle. The challenge requires the robot to (1) approach and securely grasp the block, and (2) navigate and push the block to the target location, accounting for the obstacle whose size and shape are unknown. This task demands precision handling and adaptability to avoid obstacle while ensuring the block reaches its target position. As with the FetchPush, FetchObsPush is continuous, with the robot required to keep the block within 0.05 meters of the target position indefinitely. The reward system remains consistent with the FetchPush domain.\nPusher. This domain involves the manipulation of a robotic arm, specifically a sawyer robotic arm, in an environment with multiple obstacles. The objective is to successfully push an obstacle, referred to as a puck, to a designated goal area marked by a red dot. Unlike the FetchPush problem, which has a 25-dimensional observation, the state space in this environment is determined by the position of the puck and the arm. The action space, on the other hand, involves controlling the position of the robotic arm. It is a discrete 9-dimensional space where each action corresponds to a delta change in the position in a 2-D space. 
The reward system remains consistent with the FetchPush domain.\nFour rooms. In the four-room domain, the agent\u2019s task is to navigate to a specified goal while dealing with initially unknown obstacles. The agent is positioned at (0.4, -0.4) in the bottom right room, aiming for the goal at (0.25, 0.25) in the top right room. The state observation is the agent\u2019s precise (x, y) location, and it has a set of 9 possible actions to choose from, which allow movement in all cardinal and diagonal directions, or to remain stationary. Key to this task are the doorways in the walls, which are the sole means for the agent to traverse between rooms.\nIn the FetchPush, FetchPickAndPlace, FetchDraw, FetchObsPush, and Pusher domains, synthetic labels are generated using a dense sparse approach. This means that the reward returned is calculated as the negative Euclidean distance between the achieved goal position and the desired goal. In the Four rooms domain, we utilize the reward function displayed in Equation 14 ###reference_###. The reward heatmap, referred to as the oracle reward model, is depicted in Figure 11 ###reference_###. In the top right quadrant, the agent receives a reward based on the negative Euclidean distance from its current position to the goal . In the top left quadrant, the reward is the negative Euclidean distance from to a fixed point , with an additional penalty of -0.3. In the bottom left quadrant, the reward is the negative Euclidean distance from to , with a penalty of -0.6. In the bottom right quadrant, it is the negative Euclidean distance from to , with a penalty of -1."
144
+ },
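Illustrative sketch (not taken from the paper): the dense synthetic labels and the Four rooms quadrant reward described in the appendix text above can be approximated as follows. The goal position (0.25, 0.25) comes from the text; the fixed intermediate points of Equation 14 are not reproduced there, so the WAYPOINT_* coordinates below are placeholders, and treating the x/y axes as the quadrant boundaries is an assumption.

import numpy as np

def dense_synthetic_reward(achieved_goal, desired_goal):
    # Fetch*/Pusher domains: reward is the negative Euclidean distance
    # between the achieved goal position and the desired goal.
    return -float(np.linalg.norm(np.asarray(achieved_goal) - np.asarray(desired_goal)))

GOAL = np.array([0.25, 0.25])                    # stated in the text (top right room)
WAYPOINT_TOP_LEFT = np.array([0.0, 0.25])        # placeholder: not stated in the text
WAYPOINT_BOTTOM_LEFT = np.array([-0.25, 0.0])    # placeholder: not stated in the text
WAYPOINT_BOTTOM_RIGHT = np.array([0.0, -0.25])   # placeholder: not stated in the text

def four_rooms_reward(pos):
    # Quadrant-dependent shaped reward with penalties 0 / -0.3 / -0.6 / -1.0,
    # mirroring the description of Equation 14 above.
    pos = np.asarray(pos, dtype=float)
    x, y = pos
    if x >= 0 and y >= 0:                        # top right: distance to the goal
        return -float(np.linalg.norm(pos - GOAL))
    if x < 0 and y >= 0:                         # top left
        return -float(np.linalg.norm(pos - WAYPOINT_TOP_LEFT)) - 0.3
    if x < 0 and y < 0:                          # bottom left
        return -float(np.linalg.norm(pos - WAYPOINT_BOTTOM_LEFT)) - 0.6
    return -float(np.linalg.norm(pos - WAYPOINT_BOTTOM_RIGHT)) - 1.0  # bottom right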
145
+ {
146
+ "section_id": "Appendix 2",
147
+ "parent_section_id": null,
148
+ "section_name": "Appendix B Details of Implementation",
149
+ "text": ""
150
+ }
151
+ ],
152
+ "tables": {
153
+ "1": {
154
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span><span class=\"ltx_text\" id=\"S4.T1.4.1\" style=\"color:#000000;\">Model parameter count and time consumption for the first 1000 episodes on FetchPush domain. Here, \u2019M\u2019 denotes \u2019million\u2019 for the quantity of model parameters, and \u2019min\u2019 denotes \u2019minutes\u2019 for the duration of time spent.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.2.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.2.3.1.1\">Algorithm</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.2.3.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.3.1.2.1\">MENTOR</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.2.3.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.3.1.3.1\">HER</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.2.4.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.2.4.1.1\">low-level actor&amp;critic</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.4.1.2\">0.560M</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.4.1.3\">0.280M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.5.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.2.5.2.1\">high-level actor&amp;critic</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.5.2.2\">0.189M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.5.2.3\">0.189M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.6.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.2.6.3.1\">reward</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.6.3.2\">0.140M -</td>\n<td class=\"ltx_td\" id=\"S4.T1.2.6.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.7.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.2.7.4.1\">rnd</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.7.4.2\">0.278M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.7.4.3\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.8.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.2.8.5.1\">distance</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.8.5.2\">0.068M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.8.5.3\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.9.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.2.9.6.1\">total count</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.9.6.2\">1.235M</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.9.6.3\">0.469M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S4.T1.2.2.3\">time consumption</th>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.1.1.1\">18.63 0.05min</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.2.2.2\">10.76 0.08min</td>\n</tr>\n</tbody>\n</table>\n</figure>",
155
+ "capture": "TABLE I: Model parameter count and time consumption for the first 1000 episodes on FetchPush domain. Here, \u2019M\u2019 denotes \u2019million\u2019 for the quantity of model parameters, and \u2019min\u2019 denotes \u2019minutes\u2019 for the duration of time spent."
156
+ },
157
+ "2": {
158
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Cross-analysis on episodes (100 steps per episode) to the success of variables batch queries and query frequency. For each combination of variables, we ran 5 trials on different seeds and recorded the mean and variance.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.10\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.10.11.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.10.11.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"3\" id=\"S4.T2.10.11.1.2\">Query Frequency</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.10.12.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"S4.T2.10.12.2.1\">Batch Queries</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T2.10.12.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.10.12.2.2.1\">25</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T2.10.12.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.10.12.2.3.1\">50</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T2.10.12.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.10.12.2.4.1\">100</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.2.1\">0</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.1\">35221441</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.4\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.4.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.1\">10</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.1\">1565128</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.2\">1765268</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.3\">1820261</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.7.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.7.7.4.1\">25</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.1\">1455165</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.6.6.2\">1770108</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.7.7.3\">1683203</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.10.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S4.T2.10.10.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.10.10.4.1\">50</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.8.8.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.8.8.1.1\">140655</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.9.9.2\">1735253</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.10.10.3\">1760277</td>\n</tr>\n</tbody>\n</table>\n</figure>",
159
+ "capture": "TABLE II: Cross-analysis on episodes (100 steps per episode) to the success of variables batch queries and query frequency. For each combination of variables, we ran 5 trials on different seeds and recorded the mean and variance."
160
+ },
161
+ "3": {
162
+ "table_html": "<figure class=\"ltx_table\" id=\"A2.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Hyperparameters of MENTOR.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"A2.T3.14\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A2.T3.14.15.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A2.T3.14.15.1.1\">Hyperparameters</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"A2.T3.14.15.1.2\">Values</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.16.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A2.T3.14.16.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T3.14.16.2.1.1\">High-level policy</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_border_tt\" id=\"A2.T3.14.16.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A2.T3.1.1.2\">Actor learning rate</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"A2.T3.1.1.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.2.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.2.2.2\">Critic learning rate</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.2.2.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.3.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.3.3.2\">Replay buffer size</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.3.3.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.17.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.17.3.1\">Hidden layers</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.17.3.2\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.18.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.18.4.1\">Hidden size</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.18.4.2\">256</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.19.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.19.5.1\">Batch size</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.19.5.2\">256</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.20.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.20.6.1\">Soft update rate</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.20.6.2\">0.005</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.21.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.21.7.1\">Policy update frequency</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.21.7.2\">1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.4.4.1\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.4.4.2\">0.95</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.5.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.5.5.2\">Distance model learning rate</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.5.5.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.22.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.22.8.1\">Distance model replay buffer size</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.22.8.2\">1000</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.23.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.23.9.1\">Distance model hidden layers</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.23.9.2\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.24.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.24.10.1\">Distance model hidden size</td>\n<td class=\"ltx_td ltx_nopad_r 
ltx_align_left\" id=\"A2.T3.14.24.10.2\">256</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.25.11\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.25.11.1\">Distance model batch size</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.25.11.2\">256</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.6.6.2\">Reward model learning rate</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.6.6.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.26.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.26.12.1\">Reward model replay buffer size</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.26.12.2\">1000</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.27.13\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.27.13.1\">Reward model hidden layers</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.27.13.2\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.28.14\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.28.14.1\">Reward model hidden size</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.28.14.2\">256</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.29.15\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.29.15.1\">Reward model batch size</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.29.15.2\">256</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.30.16\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.30.16.1\">Query Frequency</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.30.16.2\">50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.31.17\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.31.17.1\">Batch Queries</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.31.17.2\">50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.32.18\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.32.18.1\">Success rate high threshold</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.32.18.2\">0.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.33.19\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.33.19.1\">Success rate low threshold</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.33.19.2\">0.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.7.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.7.7.1\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.7.7.2\">0.05</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.8.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.8.8.1\">Initial \n</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.8.8.2\">0.05</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.9.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.9.9.1\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.9.9.2\">0.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.34.20\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt ltx_border_t\" id=\"A2.T3.14.34.20.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T3.14.34.20.1.1\">Low-level policy</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_border_tt ltx_border_t\" id=\"A2.T3.14.34.20.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.10.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A2.T3.10.10.2\">Actor learning rate</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_t\" id=\"A2.T3.10.10.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.11.11\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.11.11.2\">Critic learning rate</td>\n<td 
class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.11.11.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.35.21\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.35.21.1\">Hidden layers</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.35.21.2\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.36.22\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.36.22.1\">Hidden size</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.36.22.2\">256</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.12.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.12.12.2\">Replay buffer size</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.12.12.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.37.23\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.37.23.1\">Batch size</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.37.23.2\">512</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.38.24\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.38.24.1\">Soft update rate</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.38.24.2\">0.005</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.39.25\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.39.25.1\">Policy update frequency</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.39.25.2\">1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.13.13\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.13.13.1\"></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.13.13.2\">0.95</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.14\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.14.2\">RND learning rate</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.14.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.40.26\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.40.26.1\">RND hidden layers</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.40.26.2\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.41.27\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.41.27.1\">RND hidden size</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.41.27.2\">256</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.42.28\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.42.28.1\">RND represent size</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.42.28.2\">512</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.43.29\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T3.14.43.29.1\">RND bonus scaling</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"A2.T3.14.43.29.2\">1.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.14.44.30\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"A2.T3.14.44.30.1\">Hindsight sample ratio</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_left ltx_border_b\" id=\"A2.T3.14.44.30.2\">0.8</td>\n</tr>\n</tbody>\n</table>\n</figure>",
163
+ "capture": "TABLE III: Hyperparameters of MENTOR."
164
+ }
165
+ },
166
+ "image_paths": {
167
+ "1": {
168
+ "figure_path": "2402.14244v2_figure_1.png",
169
+ "caption": "Figure 1: (a) The high-level policy selects subgoals with DDC (shades of green), and human guides by comparing these subgoals. (b) The low-level decouples exploration and exploitation through two policies, one policy explores the environment and the other learns from the experience of exploration. (c) Diagrammatic representation of MENTOR framework.",
170
+ "url": "http://arxiv.org/html/2402.14244v2/x1.png"
171
+ },
172
+ "2": {
173
+ "figure_path": "2402.14244v2_figure_2.png",
174
+ "caption": "Figure 2: As the low-level capability improves, the DDC progressively relaxes, allowing the high-level to propose increasingly challenging subgoals.",
175
+ "url": "http://arxiv.org/html/2402.14244v2/x2.png"
176
+ },
177
+ "3(a)": {
178
+ "figure_path": "2402.14244v2_figure_3(a).png",
179
+ "caption": "FetchPush\nFigure 3: Experimental Domains for Evaluating MENTOR compared with baselines. The Pusher and Four Rooms serve as representative domains within the class of Partially Observable Markov Decision Processes (POMDPs), characterized by uncertainties in the state observations which require the algorithm to make decisions based on incomplete information. In contrast, the rest domains, such as FetchObsPush, provide a fully observable state space. For further details, please refer to Appendix A.",
180
+ "url": "http://arxiv.org/html/2402.14244v2/x3.png"
181
+ },
182
+ "3(b)": {
183
+ "figure_path": "2402.14244v2_figure_3(b).png",
184
+ "caption": "FetchPickAndPlace\nFigure 3: Experimental Domains for Evaluating MENTOR compared with baselines. The Pusher and Four Rooms serve as representative domains within the class of Partially Observable Markov Decision Processes (POMDPs), characterized by uncertainties in the state observations which require the algorithm to make decisions based on incomplete information. In contrast, the rest domains, such as FetchObsPush, provide a fully observable state space. For further details, please refer to Appendix A.",
185
+ "url": "http://arxiv.org/html/2402.14244v2/x4.png"
186
+ },
187
+ "3(c)": {
188
+ "figure_path": "2402.14244v2_figure_3(c).png",
189
+ "caption": "FetchDraw\nFigure 3: Experimental Domains for Evaluating MENTOR compared with baselines. The Pusher and Four Rooms serve as representative domains within the class of Partially Observable Markov Decision Processes (POMDPs), characterized by uncertainties in the state observations which require the algorithm to make decisions based on incomplete information. In contrast, the rest domains, such as FetchObsPush, provide a fully observable state space. For further details, please refer to Appendix A.",
190
+ "url": "http://arxiv.org/html/2402.14244v2/x5.png"
191
+ },
192
+ "3(d)": {
193
+ "figure_path": "2402.14244v2_figure_3(d).png",
194
+ "caption": "FetchObsPush\nFigure 3: Experimental Domains for Evaluating MENTOR compared with baselines. The Pusher and Four Rooms serve as representative domains within the class of Partially Observable Markov Decision Processes (POMDPs), characterized by uncertainties in the state observations which require the algorithm to make decisions based on incomplete information. In contrast, the rest domains, such as FetchObsPush, provide a fully observable state space. For further details, please refer to Appendix A.",
195
+ "url": "http://arxiv.org/html/2402.14244v2/x6.png"
196
+ },
197
+ "3(e)": {
198
+ "figure_path": "2402.14244v2_figure_3(e).png",
199
+ "caption": "Pusher\nFigure 3: Experimental Domains for Evaluating MENTOR compared with baselines. The Pusher and Four Rooms serve as representative domains within the class of Partially Observable Markov Decision Processes (POMDPs), characterized by uncertainties in the state observations which require the algorithm to make decisions based on incomplete information. In contrast, the rest domains, such as FetchObsPush, provide a fully observable state space. For further details, please refer to Appendix A.",
200
+ "url": "http://arxiv.org/html/2402.14244v2/x7.png"
201
+ },
202
+ "3(f)": {
203
+ "figure_path": "2402.14244v2_figure_3(f).png",
204
+ "caption": "Four rooms\nFigure 3: Experimental Domains for Evaluating MENTOR compared with baselines. The Pusher and Four Rooms serve as representative domains within the class of Partially Observable Markov Decision Processes (POMDPs), characterized by uncertainties in the state observations which require the algorithm to make decisions based on incomplete information. In contrast, the rest domains, such as FetchObsPush, provide a fully observable state space. For further details, please refer to Appendix A.",
205
+ "url": "http://arxiv.org/html/2402.14244v2/x8.png"
206
+ },
207
+ "4": {
208
+ "figure_path": "2402.14244v2_figure_4.png",
209
+ "caption": "Figure 4: Graphical representation of the success rates for MENTOR in comparison to other baseline methods across different benchmarks on five random seeds. The shaded areas surrounding each curve represent the standard deviation. Within the Four Rooms domain, the performance curve exhibits non-smooth behavior due to the fixed positions of the starting point and the goal. Consequently, the success rate can abruptly transition from 0% to 100%, leading to the curve with large variance. Any curves that are not visible in the graph, indicate a zero success rate throughout the trials. These results are aggregated from an average of five individual runs.",
210
+ "url": "http://arxiv.org/html/2402.14244v2/x9.png"
211
+ },
212
+ "5(a)": {
213
+ "figure_path": "2402.14244v2_figure_5(a).png",
214
+ "caption": "Figure 5: Impacts of Distance Constraints on success rate in FetchPickAndPlace and FetchPush domains. Since the high-level policy requires data to be collected before updating, a segment is missing from the distance curve.",
215
+ "url": "http://arxiv.org/html/2402.14244v2/x10.png"
216
+ },
217
+ "5(b)": {
218
+ "figure_path": "2402.14244v2_figure_5(b).png",
219
+ "caption": "Figure 5: Impacts of Distance Constraints on success rate in FetchPickAndPlace and FetchPush domains. Since the high-level policy requires data to be collected before updating, a segment is missing from the distance curve.",
220
+ "url": "http://arxiv.org/html/2402.14244v2/x11.png"
221
+ },
222
+ "6": {
223
+ "figure_path": "2402.14244v2_figure_6.png",
224
+ "caption": "Figure 6: Effects of the balancing coefficient on the environment goal success rate in FetchPickAndPlace domains are examined on five random seeds. In the first three graphs, the dashed lines represent the average success rate with auto-set \u03b1\ud835\udefc\\alphaitalic_\u03b1 in the worst case, where \u0394\u2062k\u0394\ud835\udc58\\Delta kroman_\u0394 italic_k is 0.02. The adjustment value of k\ud835\udc58kitalic_k is represented as \u0394\u2062k\u0394\ud835\udc58\\Delta kroman_\u0394 italic_k. We modify the parameter k\ud835\udc58kitalic_k to increase on successful completion of the subgoal and decrease on failure.",
225
+ "url": "http://arxiv.org/html/2402.14244v2/x12.png"
226
+ },
227
+ "7": {
228
+ "figure_path": "2402.14244v2_figure_7.png",
229
+ "caption": "Figure 7: The variation curve of k\ud835\udc58kitalic_k value throughout the training process under different \u03b1\ud835\udefc\\alphaitalic_\u03b1 value and \u0394\u2062k\u0394\ud835\udc58\\Delta kroman_\u0394 italic_k is 0.05.",
230
+ "url": "http://arxiv.org/html/2402.14244v2/x13.png"
231
+ },
232
+ "8": {
233
+ "figure_path": "2402.14244v2_figure_8.png",
234
+ "caption": "Figure 8: Near-policy sampling and exploratory mechanism effects on reward model in the Four rooms domain. On the top, the oracle reward function in the Four rooms domain indicates where the agent likely earns higher rewards, guiding it from the lower right to the upper right corner. We use the oracle reward to generate synthetic labels to train the reward model following Equation 5. The heatmaps below the oracle reward reflect how the reward model adjusts in response to the agent\u2019s exploration and policy updates within the environment. The heatmaps of reward model for the top and bottom exhibit changes across 2,000 episodes, whereas the heatmap for the center demonstrates alterations throughout 10,000 episodes.",
235
+ "url": "http://arxiv.org/html/2402.14244v2/x14.png"
236
+ },
237
+ "9(a)": {
238
+ "figure_path": "2402.14244v2_figure_9(a).png",
239
+ "caption": "Figure 9: In Four rooms domain, we compare subgoal distributions with and without DDC during training. Subgoals are shown as colored circles, with a red-to-blue gradient for training time. The starting point is marked by a black circle in the lower right, while the ending point is a pentagram in the upper left.",
240
+ "url": "http://arxiv.org/html/2402.14244v2/x15.png"
241
+ },
242
+ "9(b)": {
243
+ "figure_path": "2402.14244v2_figure_9(b).png",
244
+ "caption": "Figure 9: In Four rooms domain, we compare subgoal distributions with and without DDC during training. Subgoals are shown as colored circles, with a red-to-blue gradient for training time. The starting point is marked by a black circle in the lower right, while the ending point is a pentagram in the upper left.",
245
+ "url": "http://arxiv.org/html/2402.14244v2/x16.png"
246
+ },
247
+ "10": {
248
+ "figure_path": "2402.14244v2_figure_10.png",
249
+ "caption": "Figure 10: Exploratory mechanism effects analysis in FetchPickAndPlace domain.",
250
+ "url": "http://arxiv.org/html/2402.14244v2/x17.png"
251
+ },
252
+ "11": {
253
+ "figure_path": "2402.14244v2_figure_11.png",
254
+ "caption": "Figure 11: DDC effects on reward model in the Four rooms domain. The heatmaps serve as visual representations of the models\u2019 learning progress, where the distance model learns to accurately estimate the state to initial state distances and the reward model learns to assign values that incentivize the agent\u2019s progression towards the final goal. The overlay reward heatmap captures the integrated effect of both the reward model and the distance model, as articulated in the adjustments of the high-level policy update detailed in Equation 9.",
255
+ "url": "http://arxiv.org/html/2402.14244v2/x18.png"
256
+ },
257
+ "12(a)": {
258
+ "figure_path": "2402.14244v2_figure_12(a).png",
259
+ "caption": "Figure 12: On the top side, it records the rate of disagreement between human annotations and synthetic labels. There lies a 10% to 20% disparity between human-labeled labels and synthetic labels. On the bottom side, we present the accuracy rate on the training data of the reward model. Within the Four room domain, the reward model\u2019s accuracy is 75%, and approximately 80% for FetchPush.",
260
+ "url": "http://arxiv.org/html/2402.14244v2/x19.png"
261
+ },
262
+ "12(b)": {
263
+ "figure_path": "2402.14244v2_figure_12(b).png",
264
+ "caption": "Figure 12: On the top side, it records the rate of disagreement between human annotations and synthetic labels. There lies a 10% to 20% disparity between human-labeled labels and synthetic labels. On the bottom side, we present the accuracy rate on the training data of the reward model. Within the Four room domain, the reward model\u2019s accuracy is 75%, and approximately 80% for FetchPush.",
265
+ "url": "http://arxiv.org/html/2402.14244v2/x20.png"
266
+ },
267
+ "12(c)": {
268
+ "figure_path": "2402.14244v2_figure_12(c).png",
269
+ "caption": "Figure 12: On the top side, it records the rate of disagreement between human annotations and synthetic labels. There lies a 10% to 20% disparity between human-labeled labels and synthetic labels. On the bottom side, we present the accuracy rate on the training data of the reward model. Within the Four room domain, the reward model\u2019s accuracy is 75%, and approximately 80% for FetchPush.",
270
+ "url": "http://arxiv.org/html/2402.14244v2/x21.png"
271
+ },
272
+ "12(d)": {
273
+ "figure_path": "2402.14244v2_figure_12(d).png",
274
+ "caption": "Figure 12: On the top side, it records the rate of disagreement between human annotations and synthetic labels. There lies a 10% to 20% disparity between human-labeled labels and synthetic labels. On the bottom side, we present the accuracy rate on the training data of the reward model. Within the Four room domain, the reward model\u2019s accuracy is 75%, and approximately 80% for FetchPush.",
275
+ "url": "http://arxiv.org/html/2402.14244v2/x22.png"
276
+ },
277
+ "13(a)": {
278
+ "figure_path": "2402.14244v2_figure_13(a).png",
279
+ "caption": "Figure 13: Experiments for evaluating MENTOR with script-generated and human-collected labels on the FetchPush and Four rooms domain. In these experiments, we provided 10 labels every 25 episodes. The results for the script-generated labels and non-feedback scenarios were based on five random seeds to ensure robustness, while the human-collected label experiment relied on a single random seed.",
280
+ "url": "http://arxiv.org/html/2402.14244v2/x23.png"
281
+ },
282
+ "13(b)": {
283
+ "figure_path": "2402.14244v2_figure_13(b).png",
284
+ "caption": "Figure 13: Experiments for evaluating MENTOR with script-generated and human-collected labels on the FetchPush and Four rooms domain. In these experiments, we provided 10 labels every 25 episodes. The results for the script-generated labels and non-feedback scenarios were based on five random seeds to ensure robustness, while the human-collected label experiment relied on a single random seed.",
285
+ "url": "http://arxiv.org/html/2402.14244v2/x24.png"
286
+ },
287
+ "14(a)": {
288
+ "figure_path": "2402.14244v2_figure_14(a).png",
289
+ "caption": "Figure 14: Ablations studies for MENTOR in FetchPickAndPlace and FetchPush domains at five random seeds. The ablation experiment involved immobilizing the low-level policy, removing the HF and DDC functions of the high-level policy, immobilizing the high-level policy, and removing the EED at the low-level.",
290
+ "url": "http://arxiv.org/html/2402.14244v2/x25.png"
291
+ },
292
+ "14(b)": {
293
+ "figure_path": "2402.14244v2_figure_14(b).png",
294
+ "caption": "Figure 14: Ablations studies for MENTOR in FetchPickAndPlace and FetchPush domains at five random seeds. The ablation experiment involved immobilizing the low-level policy, removing the HF and DDC functions of the high-level policy, immobilizing the high-level policy, and removing the EED at the low-level.",
295
+ "url": "http://arxiv.org/html/2402.14244v2/x26.png"
296
+ },
297
+ "14(c)": {
298
+ "figure_path": "2402.14244v2_figure_14(c).png",
299
+ "caption": "Figure 14: Ablations studies for MENTOR in FetchPickAndPlace and FetchPush domains at five random seeds. The ablation experiment involved immobilizing the low-level policy, removing the HF and DDC functions of the high-level policy, immobilizing the high-level policy, and removing the EED at the low-level.",
300
+ "url": "http://arxiv.org/html/2402.14244v2/x27.png"
301
+ },
302
+ "14(d)": {
303
+ "figure_path": "2402.14244v2_figure_14(d).png",
304
+ "caption": "Figure 14: Ablations studies for MENTOR in FetchPickAndPlace and FetchPush domains at five random seeds. The ablation experiment involved immobilizing the low-level policy, removing the HF and DDC functions of the high-level policy, immobilizing the high-level policy, and removing the EED at the low-level.",
305
+ "url": "http://arxiv.org/html/2402.14244v2/x28.png"
306
+ },
307
+ "15": {
308
+ "figure_path": "2402.14244v2_figure_15.png",
309
+ "caption": "Figure 15: A simple interface for human annotation.",
310
+ "url": "http://arxiv.org/html/2402.14244v2/x29.png"
311
+ }
312
+ },
313
+ "validation": true,
314
+ "references": [],
315
+ "url": "http://arxiv.org/html/2402.14244v2"
316
+ }
20241127/2402.14708v2.json ADDED
@@ -0,0 +1,509 @@
1
+ {
2
+ "title": "CaT-GNN: Enhancing Credit Card Fraud Detection via Causal Temporal Graph Neural Networks",
3
+ "abstract": "Credit card fraud poses a significant threat to the economy. While Graph Neural Network (GNN)-based fraud detection methods perform well, they often overlook the causal effect of a node\u2019s local structure on predictions. This paper introduces a novel method for credit card fraud detection, the Causal Temporal Graph Neural Network (CaT-GNN), which leverages causal invariant learning to reveal inherent correlations within transaction data. By decomposing the problem into discovery and intervention phases, CaT-GNN identifies causal nodes within the transaction graph and applies a causal mixup strategy to enhance the model\u2019s robustness and interpretability. CaT-GNN consists of two key components: Causal-Inspector and Causal-Intervener. The Causal-Inspector utilizes attention weights in the temporal attention mechanism to identify causal and environment nodes without introducing additional parameters. Subsequently, the Causal-Intervener performs a causal mixup enhancement on environment nodes based on the set of nodes. Evaluated on three datasets, including a private financial dataset and two public datasets, CaT-GNN demonstrates superior performance over existing state-of-the-art methods. Our findings highlight the potential of integrating causal reasoning with graph neural networks to improve fraud detection capabilities in financial transactions.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The substantial damages wrought by financial fraud continue to garner ongoing focus from academic circles, the business sector, and regulatory bodies Jiang et al. (2016 ###reference_b18###); Aleksiejuk and Ho\u0142yst (2001 ###reference_b2###). Fraudsters masquerade as ordinary users and attack transactions made with credit cards Ileberi et al. (2022 ###reference_b17###), which may inflict substantial economic losses and pose a severe threat to sustainable economic growth AlFalahi and Nobanee (2019 ###reference_b3###). Consequently, effective detection of financial fraud is imperative for safeguarding the economy and consumer security.\nIn the financial deception realm, identifying credit card fraud has garnered considerable attention among both industry and academia Bhattacharyya et al. (2011 ###reference_b4###). Traditional approaches to detecting fraudulent activities typically entail meticulous examination of each transaction for irregularities, employing predefined criteria such as verification against lists of compromised cards or adherence to established transaction thresholds Maes et al. (2002 ###reference_b28###); Fu et al. (2016 ###reference_b12###). However, the aforementioned anti-fraud systems, based on expert prior and rules, are often susceptible to exploitation by fraudsters, who can circumvent detection by crafting ingenious transaction methods that elude the system\u2019s scrutiny of illicit activities. Toward this end, predictive modeling has been introduced, aiming to autonomously identify patterns that suggest fraudulent activity and calculate a corresponding risk score.\n###figure_1### Currently, state-of-the-art predictive models are focused on using deep learning methods, capturing potential illegal patterns in a data-driven manner Fu et al. (2016 ###reference_b12###); Dou et al. (2020 ###reference_b9###). For instance, Liu et al. (2021 ###reference_b26###) introduces PC-GNN, a Graph Neural Network approach that effectively handles class imbalance in graph-based fraud detection by selectively sampling nodes and edges, particularly focusing on the minority class. Moreover, Xiang et al. (2023 ###reference_b42###) leverages transaction records to construct a temporal transaction graph, applying a Gated Temporal Attention Network to effectively learn transaction representations and model fraud patterns. Unfortunately, i) these methods often overlook the intrinsic patterns and connections within the data due to a lack of consideration for local structure consistency; ii) they lack the ability to uncover the causal nature of each specific case, which leads to inadequate differentiation between the attributes of causal nodes and environment nodes, thereby impairing the model\u2019s generalization capabilities; iii) they lack interpretability in making specific predictions.\nIn this paper, we introduce a novel Causal Temporal Graph Neural Network, termed CaT-GNN, aiming at providing an interpretable paradigm for credit card fraud detection. Guided by the currently popular causal invariant learning techniques Chang et al. (2020 ###reference_b5###); Liu et al. (2022 ###reference_b27###), CaT-GNN\u2019s primary objective is to unveil the inherent correlations in the transaction attribute data of nodes within available temporal transaction graphs, thereby offering interpretability for complex transaction fraud problems.\nTo unravel causal correlations, specifically, we decompose the algorithmic process of CAT-GNN into two stages - discovery and intervention. 
The goal of the discovery stage is to identify potential causal components within observed temporal graph data, where we introduce a causal temporal graph neural network for modeling. Utilizing the popular node-attention metrics Veli\u010dkovi\u0107 et al. (2017 ###reference_b36###); Xiang et al. (2023 ###reference_b42###), we employ attention scores to locate key nodes, designated as causal and environment nodes. In the intervention process, we aim to reasonably enhance potential environment nodes. This approach is designed to align with and perceive the underlying distribution characteristics in explicit fraud networks, thereby boosting our temporal GNN\u2019s ability to identify and understand problematic nodes. Furthermore, drawing inspiration from Wang et al. (2020 ###reference_b38###), to ensure that causal interventions between nodes do not interfere with each other, we create parallel universes for each node. Consequently, the model is exposed to a wider potential data distribution, providing insights for fraud prediction with a causal perspective. This process can further be understood as a back-door adjustment in causal theory Pearl (2009 ###reference_b31###); Pearl and Mackenzie (2018 ###reference_b30###).\nThe contributions of this paper are summarized as follows:\nWe propose a novel method, CaT-GNN, that embodies both causality and resilience for the modeling of credit card fraud detection. By harnessing causal theory, known for its interpretability, CaT-GNN enables the model to encompass a wider potential data distribution, thereby ensuring its exceptional performance in this task.\nCaT-GNN, characterized by its refined simplicity, initially identifies causal nodes and subsequently refines the model into a causally coherent structure. It aims to achieve invariance in attribute information and temporal features through semi-supervised learning, thereby providing a bespoke and robust foundation for targeted tasks.\nWe evaluate CaT-GNN on three representative datasets, including a private financial benchmark and two public benchmarks. Extensive experiments show that our proposed method outperforms the compared state-of-the-art baselines in credit card fraud detection, thanks to the causal intervention of the causal node augmentation."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Preliminaries",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Methodology",
21
+ "text": "In Section 3.1 ###reference_###, we explore the motivation behind our approach, emphasizing the crucial role of understanding the local structure and causal relationships within transaction data to improve detection accuracy. Section 3.2 ###reference_### introduces our two-phase method: discovery and intervention. Section 3.3 ###reference_### provides the supporting causal theory."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Motivation",
27
+ "text": "###figure_2### ###figure_3### Taking the arXiv Hu et al. (2020 ###reference_b16###) dataset as an example, real-world graphs often exhibit locally variable structures, that is, the distribution of node attributes differs from the distribution of local structural properties Feng et al. (2021 ###reference_b10###). We observe that this phenomenon is also prevalent in the financial sector, where cunning fraudsters may disguise themselves through various means (such as feature camouflage and relationship disguise) to connect with users who have a good credit transaction history Dou et al. (2020 ###reference_b9###). In such scenarios, if we simply aggregate node information and neighbor information together, it is likely to obscure the suspiciousness of the fraudsters, which contradicts our objective. This situation tends to yield poorer training outcomes, especially in a semi-supervised learning environment with limited labeled data. Existing methods do not incorporate causal factors into credit card fraud modeling, resulting in models that fail to learn the intrinsic connections of node attributes. This oversight further leads to the neglect of causal attribute structure differences on test nodes, thereby reducing the model\u2019s generalizability. By comprehensively examining the confounding variables, we are able to significantly alleviate the aforementioned issue, as illustrated in Figure 2 ###reference_###. This strategy is the cornerstone of our framework and is also known as the \u201cbackdoor adjustment\u201d technique Pearl (2009 ###reference_b31###); Pearl and Mackenzie (2018 ###reference_b30###)."
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Discovering & Intervention",
33
+ "text": "Based on the motivation, we adopt a causal perspective to analyze the attribute aggregation process and formalize principles for distinguishing between causal and non-causal elements within local structures. We first introduce the discovery process to effectively examine the causal and environment nodes within the current node\u2019s local structure. In response, we refine the temporal attention graph network mechanism Xiang et al. (2022 ###reference_b41###) into a causal temporal GAT mechanism as shown in the upper half of Figure 3 ###reference_###. This refinement introduces two key components designed to accurately identify both environmental and causal nodes, which enhances our ability to understand and manipulate the local structural dynamics more effectively.\nIn the context of temporal transaction graphs, we maintain a set of transaction records, denoted as , alongside their embeddings obtained via a projection layer. As demonstrated in Shi et al. (2020 ###reference_b35###), GNNs are capable of concurrently propagating attributes and labels. Consequently, we integrate fraud labels as an embedded feature within the attribute embedding , employing masking techniques to prevent label leakage Xiang et al. (2023 ###reference_b42###). However, this aspect does not constitute the primary focus of our research.\nCausal-Inspector: We design a Causal-Inspector to identify causal and environment nodes as shown in the bottom left corner of Figure 3 ###reference_###. To aggregate information efficiently, we employ the aforementioned causal temporal graph attention mechanism, which allows for dynamic information flow based on the temporal relationships among transactions. Leveraging a multi-head attention mechanism, we compute temporal attention scores that serve as weights for each neighboring node, facilitating the assessment of each neighbor\u2019s causal importance, which can be formulated as follows:\nwhere is a learnable weight matrix, represents the attention weight of node with respect to node in one head, which determines the importance of node relative to node . The symbol represents the concatenation operation. is the set of temporal neighboring nodes of node . In order to quantify the importance of each node, we aggregate the attention weights from each attention head and compute the average to determine the final weight of the node. Then, based on its final weight, we calculate its normalized importance:\nwhere is the total number of attention heads and represents the set of importance scores of each node with respect to . This formula calculates the normalized importance weight , representing the importance of node by compiling the contributions from all attention heads, thus providing a comprehensive measure of node significance.\nTo segregate the nodes into environmental and causal categories, we introduce a proportion parameter , ranging between 0 and 1, which denotes the fraction of nodes to be earmarked as environment nodes. This approach affords us the flexibility to select environment nodes tailored to the specific exigencies of the graph. We use the function to select the nodes with the lowest importance scores as environment nodes. Therefore, a ranking function is defined to map to its rank among all node importance scores. 
Then, we determine the environment set as:\nThe remaining nodes, those not in , naturally form the set of causal nodes . This method ensures that nodes with the lowest importance scores are precisely selected as environment nodes according to the proportion , while the rest serve as causal nodes. Due to the differences between test and training distributions Feng et al. (2021 ###reference_b10###), CaT-GNN is dedicated to perceiving the essence of temporal graph data, thereby enhancing generalization capabilities and robustness.\nCausal-Intervener: We design a Causal-Intervener as shown in the bottom right corner of Figure 3 ###reference_###, which employs a transformative mixup strategy known as a causal mixup, which blends environment nodes with a series of causally significant nodes. Given an environmental node , we select the causal nodes with the highest importance scores, which are computed as outlined in the Causal-Inspector, from the causal set at a proportion of . The causal mixup is then executed by linearly combining the environmental node with the selected causal nodes, weighted by their respective coefficients\n, which are learned through a dedicated linear layer:\nwhere is the causally mixed environmental node, is the number of selected causal nodes, is the self-weight of the environmental node reflecting its inherent causal significance, and is the causal node weight. These weights are normalized such that . The incorporation of the causal mixup enhances the robustness of the model against distributional shifts by embedding a richer causal structure within the environmental node. By adapting the causal structure to the environmental context, the Causal-Intervener aims to mitigate the disparity between training and test distributions, thus bolstering the model\u2019s generalizability. Finally, we aggregate the information, and the outputs of multiple attention heads are concatenated to form a more comprehensive representation:\nwhere is a learnable weight matrix, is an attention head, denotes the aggregated embeddings. It is important to highlight that the causal intervention result on an environmental node with respect to is essentially a duplicate of and does not modify itself. This distinction is crucial as it guarantees that the process of augmenting central nodes within individual local structures remains mutually non-disruptive. By preserving the original state of , we ensure that enhancements applied to central nodes in one local structure do not adversely affect or interfere with those in another, maintaining the integrity and independence of local structural enhancements Wang et al. (2020 ###reference_b38###)."
34
+ },
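A minimal PyTorch-style sketch of the Causal-Inspector / Causal-Intervener logic described in the section text above (head-averaged attention scores, selection of the lowest-scoring fraction rho of neighbours as environment nodes, and a learned causal mixup). This is an illustrative reading of the text, not the authors' released implementation; the names alpha (per-head attention weights over one node's temporal neighbours), h (neighbour embeddings), mix_layer (a linear layer with top_k + 1 outputs) and top_k are assumptions introduced here.

import torch
import torch.nn.functional as F

def node_importance(alpha):
    # alpha: (num_heads, num_neighbors) attention weights w.r.t. one centre node.
    # Average over heads, then normalise to obtain per-neighbour importance scores.
    w = alpha.mean(dim=0)
    return w / w.sum()

def split_environment_causal(importance, rho):
    # The rho fraction of neighbours with the lowest importance become
    # environment nodes; the remaining neighbours are treated as causal nodes.
    k_env = int(rho * importance.numel())
    order = torch.argsort(importance)            # ascending importance
    return order[:k_env], order[k_env:]          # environment indices, causal indices

def causal_mixup(h, env_idx, causal_idx, importance, mix_layer, top_k=3):
    # Blend a copy of each environment node with the top-k causal nodes, using
    # mixing weights from a dedicated linear layer, softmax-normalised so they
    # sum to one (self-weight plus causal-node weights), as in the text.
    # Assumes at least top_k causal neighbours are available.
    top = causal_idx[torch.argsort(importance[causal_idx], descending=True)[:top_k]]
    mixed = h.clone()                            # originals are left untouched
    for i in env_idx:
        context = torch.cat([h[i].unsqueeze(0), h[top]], dim=0)      # (top_k + 1, d)
        weights = F.softmax(mix_layer(h[i]), dim=-1).unsqueeze(-1)   # (top_k + 1, 1)
        mixed[i] = (weights * context).sum(dim=0)
    return mixed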
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "Causal Support of CaT-GNN",
39
+ "text": "In elucidating the causal backbone of CaT-GNN, we invoke causal theory to formulate a Structural Causal Model (SCM) as propounded by Pearl (2009 ###reference_b31###). This framework scrutinizes four distinct elements: the inputs node attribute , the truth label decided by both the attribute of causal nodes of symbolized as , and the confounder , emblematic of the attribute of environment nodes. The causal interplay among these variables can be articulated as follows:\nXE. The local structure of node attribute is composed of causal nodes attributes and environment nodes attributes .\nYE. The causal attributes actually determine the true value , however, the environmental attributes also affect the prediction results, causing spurious associations.\nDo-calculus Pearl (2009 ###reference_b31###) is a trio of rules within the causal inference framework that facilitates the mathematical deduction of causal effects from observed data. These rules enable manipulation of operator expressions, essential for implementing interventions in causal models:\nTypically, a model that is trained using Empirical Risk Minimization (ERM) may not perform adequately when generalizing to test data distribution . These shifts in distribution are often a result of changes within environment nodes, necessitating the need to tackle the confounding effects. As illustrated in Figure 3 ###reference_###, we apply causal intervention to enhance the model\u2019s generalizability and robustness. To this end, our approach utilizes do-calculus Pearl (2009 ###reference_b31###) on the variable to negate the influence of the backdoor path by estimating :\nwhere signifies the count of environment nodes, with indicating the -th environmental variable. The environmental enhancement of Cat-GNN is in alignment with the theory of backdoor adjustment, thereby allowing for an effective exploration of possible test environment distributions."
40
+ },
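The display equation for the backdoor adjustment did not survive extraction in the section text above ("estimating : where signifies the count of environment nodes, with indicating the -th environmental variable"). One plausible LaTeX rendering, based on the standard backdoor-adjustment formula with a uniform weight over the N observed environment-node attributes, is given below; the paper's exact notation may differ.

P\bigl(Y \mid \mathrm{do}(X)\bigr)
  = \sum_{i=1}^{N} P\bigl(Y \mid X,\, E = e_i\bigr)\, P(e_i)
  \approx \frac{1}{N} \sum_{i=1}^{N} P\bigl(Y \mid X,\, E = e_i\bigr)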
41
+ {
42
+ "section_id": "4",
43
+ "parent_section_id": null,
44
+ "section_name": "Experiments",
45
+ "text": "In this section, we critically assess the CaT-GNN model on a series of research questions (RQs) to establish its efficacy in graph-based fraud detection tasks. The research questions are formulated as follows:\nRQ1: Does CaT-GNN outperform the current state-of-the-art models for graph-based anomaly detection?\nRQ2: What is the effectiveness of causal intervention in the aggregation of neighboring information?\nRQ3: What is the performance with respect to different environmental ratios ?\nRQ4: Is CaT-GNN equally effective in semi-supervised settings, and how does it perform with limited labeled data?\nRQ5: Does the causal intervention component lead to a significant decrease in efficiency?"
46
+ },
47
+ {
48
+ "section_id": "4.1",
49
+ "parent_section_id": "4",
50
+ "section_name": "Experimental Setup",
51
+ "text": "We adopt one open-source financial fraud semi-supervised dataset Xiang et al. (2023 ###reference_b42###), termed S-FFSD (https://github.com/AI4Risk/antifraud), with partially labeled transaction records. Consistent with the definition in Section 2 ###reference_###, if a transaction is reported by a cardholder or identified by financial experts as fraudulent, the label will be 1; otherwise, the label will be 0. In addition, we also validate on two other public fraud detection datasets, YelpChi and Amazon. YelpChi: Rayana and Akoglu (2015 ###reference_b32###) compiles a collection of hotel and restaurant reviews from Yelp, in which nodes represent reviews, and there are three kinds of relationship edges among these reviews. Amazon: The Amazon graph McAuley and Leskovec (2013 ###reference_b29###) comprises reviews of products in the musical instruments category, in which nodes represent users, and the edges are the corresponding three kinds of relationships among reviews. The statistics of the above three datasets are shown in Table 1 ###reference_###.\nTo verify the effectiveness of our proposed CaT-GNN, we compare it with the following state-of-the-art methods. \u2776 Player2Vec. Attributed Heterogeneous Information Network Embedding Framework Zhang et al. (2019 ###reference_b45###). \u2777 Semi-GNN. A semi-supervised graph attentive network for financial fraud detection that adopts the attention mechanism to aggregate node embeddings across graphs Wang et al. (2019 ###reference_b37###). \u2778 GraphConsis. The GNN-based fraud detector aimed at the inconsistency problem Liu et al. (2020 ###reference_b25###). \u2779 GraphSAGE. The inductive graph learning model based on a fixed sample number of neighbor nodes Hamilton et al. (2017 ###reference_b15###). \u277a CARE-GNN. The camouflage-resistant GNN-based model tackling fraud detection Dou et al. (2020 ###reference_b9###). \u277b PC-GNN. A GNN-based model to address the issue of class imbalance in graph-based fraud detection Liu et al. (2021 ###reference_b26###). \u277c GTAN. A semi-supervised GNN-based model that utilizes a gated temporal attention mechanism to analyze credit card transaction data Xiang et al. (2023 ###reference_b42###). \u277d CaT-GNN (PL). This variant of the CaT-GNN framework selects environment nodes based on a proportion and determines mixup weights via a learnable linear layer. \u277e CaT-GNN (PI). This version employs a proportional selection of environment nodes and leverages the nodes\u2019 importance scores to inform the mixup weights. \u277f CaT-GNN (FL). This variant uses a fixed number of environment nodes; mixup weights are determined by a learnable linear layer. \u277f CaT-GNN (FI). This variant combines fixed environment node selection with importance-based weighting for the mixup.\nIn our experiment, the learning rate is set to 0.003, and the batch size is set to 256. Moreover, the input dropout ratio is set to 0.2, with the number of attention heads set to 4 and the hidden dimension to 256. We employed the Adam optimizer to train the model over epochs, incorporating an early stopping mechanism to prevent overfitting. In GraphConsis, CARE-GNN, PC-GNN and GTAN, we used the default parameters suggested by the original papers. In Semi-GNN and Player2Vec, we set the learning rate to 0.01. In the YelpChi and Amazon datasets, the train, validation, and test ratios are set to 40%, 20%, and 40%, respectively. In the S-FFSD dataset, we use the first 7 months\u2019 transactions as training data, and the rest as test data. 
Similar to previous work Liu et al. (2021 ###reference_b26###), we repeat experiments with different random seeds 5 times and we report the average and standard error.\nExperimental results are statistically significant\nwith .\nCat-GNN and other baselines are all implemented in Pytorch 1.9.0 with Python 3.8. All the experiments are conducted on Ubuntu 18.04.5 LTS server with 1 NVIDIA Tesla V100 GPU, 440 GB RAM.\nWe selected three representative and extensively utilized metrics: AUC (Area Under the ROC Curve), F1-macro and AP (averaged precision). The first metric AUC is the area under the ROC Curve and as a single numerical value, AUC succinctly summarizes the classifier\u2019s overall performance across all thresholds. The second metric F1-macro is the macro average of F1 score which can be formulated as , and the third metric AP is averaged precision that can be formulated as , where stands for the Precision and stands for recall."
52
+ },
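To make the three evaluation metrics concrete, the following is a minimal sketch (not the authors' released code) of how AUC, F1-macro, and AP can be computed with scikit-learn; the label and score arrays are toy placeholders standing in for the real test-set outputs.

import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, average_precision_score

y_true = np.array([0, 0, 1, 0, 1, 1])               # 1 = fraud, 0 = benign
y_prob = np.array([0.1, 0.4, 0.8, 0.2, 0.7, 0.9])   # predicted fraud probability
y_pred = (y_prob >= 0.5).astype(int)                 # hard predictions at a 0.5 threshold

auc = roc_auc_score(y_true, y_prob)                  # area under the ROC curve
f1_macro = f1_score(y_true, y_pred, average="macro") # macro-averaged F1
ap = average_precision_score(y_true, y_prob)         # averaged precision

print(f"AUC={auc:.4f}  F1-macro={f1_macro:.4f}  AP={ap:.4f}")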
53
+ {
54
+ "section_id": "4.2",
55
+ "parent_section_id": "4",
56
+ "section_name": "Performance Comparison (RQ1)",
57
+ "text": "In the experiment of credit card fraud detection across three distinct datasets, Cat-GNN showcases superior performance metrics compared to its counterparts. First of all, Cat-GNN achieves the highest AUC in all three datasets, with values of 0.9035, 0.9706, and 0.8281 for YelpChi, Amazon, and S-FFSD, respectively. This indicates that Cat-GNN consistently outperforms other methods in distinguishing between classes across diverse datasets. Focusing on the F1 Score, which balances the precision and recall , Cat-GNN again tops the charts with scores of 0.7783, 0.9163, and 0.7211 for YelpChi, Amazon, and S-FFSD. This reflects the model\u2019s robustness in achieving high precision while not compromising on recall, which is essential where both false positives and false negatives carry significant consequences. Finally, Cat-GNN\u2019s superiority extends to the AP metric, with the improvement of at least 6.82%, 2.86%, and 13.10% for YelpChi, Amazon and S-FFSD respectively.\nThe comparative performance of Cat-GNN is particularly significant when contrasted with previous methods such as Player2Vec, Semi-GNN, and GraphSAGE. For the Amazon dataset, existing state-of-the-art models, like CARE-GNN, PC-GNN, and GTAN, have already proven effective at capturing the inherent correlations within the data. In this context, the benefits of causal intervention may not be as pronounced, possibly due to the dataset\u2019s simpler local structures and more uniform distribution. However, for the S-FFSD dataset, our methodology exhibits significant performance improvements. This enhancement is attributed to the complex local structures and the prevalence of unlabeled nodes within the dataset. In such scenarios, causal intervention adeptly learns the inherent attribute connections, thereby boosting the model\u2019s generalization. Additionally, learning mixup weights with a linear layer is more reasonable than weighting with importance scores. Similarly, selecting environment nodes based on proportions is more sensible than choosing a fixed number of environment nodes, and the effect is also slightly better. All in all, This superior performance can be ascribed to the integration of causal theory within the Cat-GNN, enhancing its capacity to comprehend the inherent principles of graph attributes, allowing it to discern complex patterns and interactions that other models are unable to effectively capture."
58
+ },
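The relative AP gains quoted above follow directly from the Table 2 numbers; the small worked example below (the helper function is ours, not from the paper) reproduces the 13.10% figure for S-FFSD from CaT-GNN's AP of 0.6457 versus the best baseline GTAN's 0.5709.

def relative_improvement(ours: float, baseline: float) -> float:
    """Percentage improvement of `ours` over `baseline`."""
    return (ours - baseline) / baseline * 100.0

# S-FFSD AP: CaT-GNN (PL) = 0.6457, best baseline GTAN = 0.5709 (Table 2)
print(f"{relative_improvement(0.6457, 0.5709):.2f}%")   # -> 13.10%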
59
+ {
60
+ "section_id": "4.3",
61
+ "parent_section_id": "4",
62
+ "section_name": "Ablation Study (RQ2)",
63
+ "text": "In this section, we evaluate the effectiveness of causal interventions in the aggregation within graph structures. Initially, we explore a variant without any causal intervention, termed N-CaT, which aggregates all neighboring information indiscriminately. Secondly, we introduce D-CaT, a method that omits environment nodes entirely during the aggregation phase, and directly aggregates all neighboring information in the learning process. Finally, our proposed method, CaT, integrates a causal intervention approach, simultaneously considering both causal nodes and environment nodes during aggregation to refine the learning representations.\nThe results shown in Figure 4 ###reference_### highlight the importance of causal intervention in information aggregation. N-CaT, which lacks causal discernment, performs worse across all datasets compared to CaT because it does not account for causal relationships. D-CaT, which simply deletes environmental factors, shows a significant drop in performance, as the mere deletion of environment nodes prevents the model from fully learning valuable information. Our CaT method consistently outperforms the other variants across all datasets, achieving the highest AUC scores. This superior performance underscores the value of our causal intervention technique, which effectively balances the influence of causal and environment nodes, resulting in a more generalizable model.\n###figure_4###"
64
+ },
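The three ablation variants can be read as different neighborhood-aggregation rules. The snippet below is an illustrative sketch under our own simplifying assumptions (mean aggregation and a scalar mixup coefficient); it is not the exact CaT-GNN implementation, only a way to see how N-CaT, D-CaT, and CaT differ.

import torch

def aggregate(h_causal: torch.Tensor, h_env: torch.Tensor,
              mode: str = "CaT", mix_weight: float = 0.7) -> torch.Tensor:
    """h_causal: (n_causal, d) causal-neighbor embeddings; h_env: (n_env, d) environment-neighbor embeddings."""
    if mode == "N-CaT":   # no causal discernment: aggregate everything indiscriminately
        return torch.cat([h_causal, h_env]).mean(dim=0)
    if mode == "D-CaT":   # environment nodes deleted: only causal neighbors remain
        return h_causal.mean(dim=0)
    # CaT: keep causal neighbors and mix up the environment-node features before aggregating
    mixed_env = mix_weight * h_env + (1.0 - mix_weight) * h_env.mean(dim=0)
    return torch.cat([h_causal, mixed_env]).mean(dim=0)

h_causal, h_env = torch.randn(4, 16), torch.randn(2, 16)
for mode in ("N-CaT", "D-CaT", "CaT"):
    print(mode, aggregate(h_causal, h_env, mode).shape)   # each variant yields a 16-d node representation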
65
+ {
66
+ "section_id": "4.4",
67
+ "parent_section_id": "4",
68
+ "section_name": "Parameter Sensitivity Analysis (RQ3, RQ4)",
69
+ "text": "In this section, we study the model parameter sensitivity with respect to the environment nodes ratio and the training ratio. The corresponding results are reported in Figure 5 ###reference_###.\n###figure_5### ###figure_6### As demonstrated in the left of Figure 5 ###reference_###, using the YelpChi dataset as an example, the performance of Cat-GNN (measured by AUC as the performance metric) significantly surpasses other competitive models, including PC-GNN and CARE-GNN, across all training ratios, from 10% to 70%. Particularly at lower training ratios (such as 10%), Cat-GNN remains effective for semi-supervised learning and exhibits more robust performance compared to other models.\nIn our sensitivity analysis of the environmental ratio as demonstrated in the right of Figure 5 ###reference_###, we observed that Cat-GNN\u2019s performance on the Amazon dataset is less affected by variations in the training ratio, with AUC fluctuations not exceeding 2%. Conversely, on the S-FFSD dataset, as the training ratio increases from 5% to 40%, there is a larger fluctuation in Cat-GNN\u2019s performance. This can be attributed to the characteristics of the dataset or the differences in the distribution of labeled data."
70
+ },
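A sweep of this kind can be scripted as below. This is a self-contained sketch of the experimental loop only: `run_cat_gnn` is a hypothetical stand-in for one full CaT-GNN train/evaluate cycle and is stubbed out with a dummy AUC so the snippet runs on its own.

import random

def run_cat_gnn(train_ratio: float, env_ratio: float) -> float:
    """Placeholder for one CaT-GNN train/evaluate cycle returning test AUC."""
    random.seed(int(1000 * (train_ratio + env_ratio)))
    return 0.80 + 0.05 * random.random()          # dummy AUC value

train_ratios = [0.10, 0.30, 0.50, 0.70]           # sweep shown in the left panel of Figure 5
env_ratios = [0.05, 0.10, 0.20, 0.40]             # sweep shown in the right panel of Figure 5

auc_by_setting = {
    (tr, er): run_cat_gnn(tr, er)
    for tr in train_ratios
    for er in env_ratios
}
for (tr, er), auc in sorted(auc_by_setting.items()):
    print(f"train={tr:.0%}  env={er:.0%}  AUC={auc:.4f}")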
71
+ {
72
+ "section_id": "4.5",
73
+ "parent_section_id": "4",
74
+ "section_name": "Model Efficiency (RQ5)",
75
+ "text": "In this section, we present a comprehensive analysis of the efficiency of CaT-GNN. Our causal intervention aims to boost performance while maintaining computational efficiency. Table 3 ###reference_### shows that the performance enhancements are achieved without imposing significant additional computational costs. The results indicate that the execution time with causal intervention experienced only a marginal increase. This negligible rise in time is a testament to the algorithm\u2019s ability to retain its computational efficiency while incorporating our advancements. Thus, our algorithm stands as a robust solution that can cater to the needs of high-performance computing while facilitating enhancements that do not compromise on efficiency."
76
+ },
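The overhead figures in Table 3 amount to wall-clock timing of a full run with and without the intervention step. The minimal sketch below illustrates such a measurement; `train_once` is a dummy workload standing in for one training run, not the actual CaT-GNN code.

import time

def train_once(causal_intervention: bool) -> None:
    """Dummy workload standing in for one full training run."""
    time.sleep(0.20 if causal_intervention else 0.195)

def wall_clock(causal_intervention: bool) -> float:
    start = time.perf_counter()
    train_once(causal_intervention)
    return time.perf_counter() - start

base = wall_clock(causal_intervention=False)
with_ci = wall_clock(causal_intervention=True)
print(f"no intervention: {base:.3f}s, with intervention: {with_ci:.3f}s "
      f"({100.0 * (with_ci - base) / base:+.2f}%)")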
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Related Works",
81
+ "text": ""
82
+ },
83
+ {
84
+ "section_id": "6",
85
+ "parent_section_id": null,
86
+ "section_name": "Conclusion & Future Work",
87
+ "text": "In this work, we introduce the Causal Temporal Graph Neural Network (CaT-GNN), a causal approach in the domain of credit card fraud detection. Our model innovates by integrating causal learning principles to discern and leverage the intricate relationships within transaction data. We validate the effectiveness of CaT-GNN through comprehensive experiments on diverse datasets, where it consistently outperforms existing techniques. Notably, CaT-GNN not only enhances detection accuracy but also maintains computational efficiency, making it viable for large-scale deployment. Future directions will explore extending this methodology to a broader range of fraudulent activities, with the aim of fortifying the integrity of financial systems globally."
88
+ }
89
+ ],
90
+ "appendix": [],
91
+ "tables": {
92
+ "1": {
93
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.4.1.1\" style=\"font-size:113%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S4.T1.5.2\" style=\"font-size:113%;\">Statistics of the three datasets.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.6.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.6.1.1.1\"><span class=\"ltx_text\" id=\"S4.T1.6.1.1.1.1\" style=\"font-size:80%;\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.6.1.1.2\"><span class=\"ltx_text\" id=\"S4.T1.6.1.1.2.1\" style=\"font-size:80%;\">#Node</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.6.1.1.3\"><span class=\"ltx_text\" id=\"S4.T1.6.1.1.3.1\" style=\"font-size:80%;\">#Edge</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.6.1.1.4\"><span class=\"ltx_text\" id=\"S4.T1.6.1.1.4.1\" style=\"font-size:80%;\">#Fraud</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.6.1.1.5\"><span class=\"ltx_text\" id=\"S4.T1.6.1.1.5.1\" style=\"font-size:80%;\">#benigh</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.6.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.2.1.1\" style=\"padding-bottom:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.6.2.1.1.1\" style=\"font-size:80%;\">YelpChi</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.2.1.2\" style=\"padding-bottom:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.6.2.1.2.1\" style=\"font-size:80%;\">45,954</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.2.1.3\" style=\"padding-bottom:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.6.2.1.3.1\" style=\"font-size:80%;\">7,739,912</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.2.1.4\" style=\"padding-bottom:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.6.2.1.4.1\" style=\"font-size:80%;\">6,677</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.2.1.5\" style=\"padding-bottom:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.6.2.1.5.1\" style=\"font-size:80%;\">39,277</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.3.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.3.2.1\" style=\"padding-bottom:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.6.3.2.1.1\" style=\"font-size:80%;\">Amazon</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.3.2.2\" style=\"padding-bottom:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.6.3.2.2.1\" style=\"font-size:80%;\">11,948</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.3.2.3\" style=\"padding-bottom:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.6.3.2.3.1\" style=\"font-size:80%;\">8,808,728</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.3.2.4\" style=\"padding-bottom:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.6.3.2.4.1\" style=\"font-size:80%;\">821</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.3.2.5\" style=\"padding-bottom:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.6.3.2.5.1\" style=\"font-size:80%;\">11,127</span></td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S4.T1.6.4.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.4.3.1\"><span class=\"ltx_text\" id=\"S4.T1.6.4.3.1.1\" style=\"font-size:80%;\">S-FFSD</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.4.3.2\"><span class=\"ltx_text\" id=\"S4.T1.6.4.3.2.1\" style=\"font-size:80%;\">130,840</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.4.3.3\"><span class=\"ltx_text\" id=\"S4.T1.6.4.3.3.1\" style=\"font-size:80%;\">3,492,226</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.4.3.4\"><span class=\"ltx_text\" id=\"S4.T1.6.4.3.4.1\" style=\"font-size:80%;\">2,950</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.4.3.5\"><span class=\"ltx_text\" id=\"S4.T1.6.4.3.5.1\" style=\"font-size:80%;\">17,553</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
94
+ "capture": "Table 1: Statistics of the three datasets."
95
+ },
96
+ "2": {
97
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Performance Comparison (in percent \u00b1 standard deviation) on YelpChi, Amazon and S-FFSD datasets across five runs. The best performances are marked with <span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.106.1\">bold font</span>, and the second-to-best are shown <span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.107.2\">underlined.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.99\" style=\"width:433.6pt;height:160.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-99.6pt,36.8pt) scale(0.685224907555936,0.685224907555936) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.99.99\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.99.99.100.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T2.99.99.100.1.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.99.99.100.1.1.1\" style=\"font-size:90%;\">Dataset</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S4.T2.99.99.100.1.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.99.99.100.1.2.1\" style=\"font-size:90%;\">YelpChi</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S4.T2.99.99.100.1.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.99.99.100.1.3.1\" style=\"font-size:90%;\">Amazon</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S4.T2.99.99.100.1.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.99.99.100.1.4.1\" style=\"font-size:90%;\">S-FFSD</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.99.99.101.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.99.99.101.2.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.99.99.101.2.1.1\" style=\"font-size:90%;\">Metric</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.99.99.101.2.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.99.99.101.2.2.1\" style=\"font-size:90%;\">AUC</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.99.99.101.2.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.99.99.101.2.3.1\" style=\"font-size:90%;\">F1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.99.99.101.2.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.99.99.101.2.4.1\" style=\"font-size:90%;\">AP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.99.99.101.2.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.99.99.101.2.5.1\" style=\"font-size:90%;\">AUC</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.99.99.101.2.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.99.99.101.2.6.1\" style=\"font-size:90%;\">F1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.99.99.101.2.7\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" 
id=\"S4.T2.99.99.101.2.7.1\" style=\"font-size:90%;\">AP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.99.99.101.2.8\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.99.99.101.2.8.1\" style=\"font-size:90%;\">AUC</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.99.99.101.2.9\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.99.99.101.2.9.1\" style=\"font-size:90%;\">F1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.99.99.101.2.10\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.99.99.101.2.10.1\" style=\"font-size:90%;\">AP</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.9.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.9.9.9.10\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.9.9.9.10.1\" style=\"font-size:90%;\">Player2Vec</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.1.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.1.1.1.1.1\" style=\"font-size:90%;\">0.7012</span><span class=\"ltx_text\" id=\"S4.T2.1.1.1.1.2\" style=\"font-size:90%;\">0.0089</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.2.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.2.2.2.2.1\" style=\"font-size:90%;\">0.4120</span><span class=\"ltx_text\" id=\"S4.T2.2.2.2.2.2\" style=\"font-size:90%;\">0.0142</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.3.3.3.3.1\" style=\"font-size:90%;\">0.2477</span><span class=\"ltx_text\" id=\"S4.T2.3.3.3.3.2\" style=\"font-size:90%;\">0.0161</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.4.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.4.4.4.4.1\" style=\"font-size:90%;\">0.6187</span><span class=\"ltx_text\" id=\"S4.T2.4.4.4.4.2\" style=\"font-size:90%;\">0.0152</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.5.5.5.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.5.5.5.5.1\" style=\"font-size:90%;\">0.2455</span><span class=\"ltx_text\" id=\"S4.T2.5.5.5.5.2\" style=\"font-size:90%;\">0.0091</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.6.6.6.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.6.6.6.6.1\" style=\"font-size:90%;\">0.1301</span><span class=\"ltx_text\" id=\"S4.T2.6.6.6.6.2\" style=\"font-size:90%;\">0.0117</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.7.7.7.7\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.7.7.7.7.1\" style=\"font-size:90%;\">0.5284</span><span class=\"ltx_text\" id=\"S4.T2.7.7.7.7.2\" style=\"font-size:90%;\">0.0101</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.8.8.8.8\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.8.8.8.8.1\" style=\"font-size:90%;\">0.2149</span><span class=\"ltx_text\" id=\"S4.T2.8.8.8.8.2\" style=\"font-size:90%;\">0.0136</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.9.9.9.9\" 
style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.9.9.9.9.1\" style=\"font-size:90%;\">0.2067</span><span class=\"ltx_text\" id=\"S4.T2.9.9.9.9.2\" style=\"font-size:90%;\">0.0155</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.18.18.18\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.18.18.18.10\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.18.18.18.10.1\" style=\"font-size:90%;\">Semi-GNN</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.10.10.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.10.10.10.1.1\" style=\"font-size:90%;\">0.5160</span><span class=\"ltx_text\" id=\"S4.T2.10.10.10.1.2\" style=\"font-size:90%;\">0.0154</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.11.11.11.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.11.11.11.2.1\" style=\"font-size:90%;\">0.1023</span><span class=\"ltx_text\" id=\"S4.T2.11.11.11.2.2\" style=\"font-size:90%;\">0.0216</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.12.12.12.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.12.12.12.3.1\" style=\"font-size:90%;\">0.1809</span><span class=\"ltx_text\" id=\"S4.T2.12.12.12.3.2\" style=\"font-size:90%;\">0.0205</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.13.13.13.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.13.13.13.4.1\" style=\"font-size:90%;\">0.7059</span><span class=\"ltx_text\" id=\"S4.T2.13.13.13.4.2\" style=\"font-size:90%;\">0.0211</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.14.14.14.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.14.14.14.5.1\" style=\"font-size:90%;\">0.5486</span><span class=\"ltx_text\" id=\"S4.T2.14.14.14.5.2\" style=\"font-size:90%;\">0.0105</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.15.15.15.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.15.15.15.6.1\" style=\"font-size:90%;\">0.2248</span><span class=\"ltx_text\" id=\"S4.T2.15.15.15.6.2\" style=\"font-size:90%;\">0.0142</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.16.16.16.7\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.16.16.16.7.1\" style=\"font-size:90%;\">0.5460</span><span class=\"ltx_text\" id=\"S4.T2.16.16.16.7.2\" style=\"font-size:90%;\">0.0125</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.17.17.17.8\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.17.17.17.8.1\" style=\"font-size:90%;\">0.4393</span><span class=\"ltx_text\" id=\"S4.T2.17.17.17.8.2\" style=\"font-size:90%;\">0.0152</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.18.18.18.9\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.18.18.18.9.1\" style=\"font-size:90%;\">0.2732</span><span class=\"ltx_text\" id=\"S4.T2.18.18.18.9.2\" style=\"font-size:90%;\">0.0207</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.27.27.27\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.27.27.27.10\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.27.27.27.10.1\" style=\"font-size:90%;\">GraphSAGE</span></th>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T2.19.19.19.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.19.19.19.1.1\" style=\"font-size:90%;\">0.5414</span><span class=\"ltx_text\" id=\"S4.T2.19.19.19.1.2\" style=\"font-size:90%;\">0.0029</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.20.20.20.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.20.20.20.2.1\" style=\"font-size:90%;\">0.4516</span><span class=\"ltx_text\" id=\"S4.T2.20.20.20.2.2\" style=\"font-size:90%;\">0.0954</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.21.21.21.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.21.21.21.3.1\" style=\"font-size:90%;\">0.1806</span><span class=\"ltx_text\" id=\"S4.T2.21.21.21.3.2\" style=\"font-size:90%;\">0.0866</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.22.22.22.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.22.22.22.4.1\" style=\"font-size:90%;\">0.7590</span><span class=\"ltx_text\" id=\"S4.T2.22.22.22.4.2\" style=\"font-size:90%;\">0.0053</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.23.23.23.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.23.23.23.5.1\" style=\"font-size:90%;\">0.5926</span><span class=\"ltx_text\" id=\"S4.T2.23.23.23.5.2\" style=\"font-size:90%;\">0.0087</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.24.24.24.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.24.24.24.6.1\" style=\"font-size:90%;\">0.6597</span><span class=\"ltx_text\" id=\"S4.T2.24.24.24.6.2\" style=\"font-size:90%;\">0.0079</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.25.25.25.7\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.25.25.25.7.1\" style=\"font-size:90%;\">0.6534</span><span class=\"ltx_text\" id=\"S4.T2.25.25.25.7.2\" style=\"font-size:90%;\">0.0095</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.26.26.26.8\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.26.26.26.8.1\" style=\"font-size:90%;\">0.5396</span><span class=\"ltx_text\" id=\"S4.T2.26.26.26.8.2\" style=\"font-size:90%;\">0.0101</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.27.27.27.9\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.27.27.27.9.1\" style=\"font-size:90%;\">0.3881</span><span class=\"ltx_text\" id=\"S4.T2.27.27.27.9.2\" style=\"font-size:90%;\">0.0089</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.36.36.36\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.36.36.36.10\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.36.36.36.10.1\" style=\"font-size:90%;\">GraphConsis</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.28.28.28.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.28.28.28.1.1\" style=\"font-size:90%;\">0.7046</span><span class=\"ltx_text\" id=\"S4.T2.28.28.28.1.2\" style=\"font-size:90%;\">0.0287</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.29.29.29.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.29.29.29.2.1\" style=\"font-size:90%;\">0.6023</span><span class=\"ltx_text\" id=\"S4.T2.29.29.29.2.2\" 
style=\"font-size:90%;\">0.0195</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.30.30.30.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.30.30.30.3.1\" style=\"font-size:90%;\">0.3269</span><span class=\"ltx_text\" id=\"S4.T2.30.30.30.3.2\" style=\"font-size:90%;\">0.0186</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.31.31.31.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.31.31.31.4.1\" style=\"font-size:90%;\">0.8761</span><span class=\"ltx_text\" id=\"S4.T2.31.31.31.4.2\" style=\"font-size:90%;\">0.0317</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.32.32.32.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.32.32.32.5.1\" style=\"font-size:90%;\">0.7725</span><span class=\"ltx_text\" id=\"S4.T2.32.32.32.5.2\" style=\"font-size:90%;\">0.0319</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.33.33.33.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.33.33.33.6.1\" style=\"font-size:90%;\">0.7296</span><span class=\"ltx_text\" id=\"S4.T2.33.33.33.6.2\" style=\"font-size:90%;\">0.0301</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.34.34.34.7\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.34.34.34.7.1\" style=\"font-size:90%;\">0.6554</span><span class=\"ltx_text\" id=\"S4.T2.34.34.34.7.2\" style=\"font-size:90%;\">0.0412</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.35.35.35.8\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.35.35.35.8.1\" style=\"font-size:90%;\">0.5436</span><span class=\"ltx_text\" id=\"S4.T2.35.35.35.8.2\" style=\"font-size:90%;\">0.0376</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.36.36.36.9\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.36.36.36.9.1\" style=\"font-size:90%;\">0.3816</span><span class=\"ltx_text\" id=\"S4.T2.36.36.36.9.2\" style=\"font-size:90%;\">0.0341</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.45.45.45\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.45.45.45.10\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.45.45.45.10.1\" style=\"font-size:90%;\">CARE-GNN</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.37.37.37.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.37.37.37.1.1\" style=\"font-size:90%;\">0.7745</span><span class=\"ltx_text\" id=\"S4.T2.37.37.37.1.2\" style=\"font-size:90%;\">0.0281</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.38.38.38.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.38.38.38.2.1\" style=\"font-size:90%;\">0.6252</span><span class=\"ltx_text\" id=\"S4.T2.38.38.38.2.2\" style=\"font-size:90%;\">0.0091</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.39.39.39.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.39.39.39.3.1\" style=\"font-size:90%;\">0.4238</span><span class=\"ltx_text\" id=\"S4.T2.39.39.39.3.2\" style=\"font-size:90%;\">0.0151</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.40.40.40.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.40.40.40.4.1\" style=\"font-size:90%;\">0.8998</span><span 
class=\"ltx_text\" id=\"S4.T2.40.40.40.4.2\" style=\"font-size:90%;\">0.0925</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.41.41.41.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.41.41.41.5.1\" style=\"font-size:90%;\">0.8468</span><span class=\"ltx_text\" id=\"S4.T2.41.41.41.5.2\" style=\"font-size:90%;\">0.0085</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.42.42.42.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.42.42.42.6.1\" style=\"font-size:90%;\">0.8117</span><span class=\"ltx_text\" id=\"S4.T2.42.42.42.6.2\" style=\"font-size:90%;\">0.0114</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.43.43.43.7\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.43.43.43.7.1\" style=\"font-size:90%;\">0.6589</span><span class=\"ltx_text\" id=\"S4.T2.43.43.43.7.2\" style=\"font-size:90%;\">0.1078</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.44.44.44.8\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.44.44.44.8.1\" style=\"font-size:90%;\">0.5725</span><span class=\"ltx_text\" id=\"S4.T2.44.44.44.8.2\" style=\"font-size:90%;\">0.0096</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.45.45.45.9\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.45.45.45.9.1\" style=\"font-size:90%;\">0.4004</span><span class=\"ltx_text\" id=\"S4.T2.45.45.45.9.2\" style=\"font-size:90%;\">0.0090</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.54.54.54\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.54.54.54.10\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.54.54.54.10.1\" style=\"font-size:90%;\">PC-GNN</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.46.46.46.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.46.46.46.1.1\" style=\"font-size:90%;\">0.7997</span><span class=\"ltx_text\" id=\"S4.T2.46.46.46.1.2\" style=\"font-size:90%;\">0.0021</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.47.47.47.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.47.47.47.2.1\" style=\"font-size:90%;\">0.6429</span><span class=\"ltx_text\" id=\"S4.T2.47.47.47.2.2\" style=\"font-size:90%;\">0.0205</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.48.48.48.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.48.48.48.3.1\" style=\"font-size:90%;\">0.4782</span><span class=\"ltx_text\" id=\"S4.T2.48.48.48.3.2\" style=\"font-size:90%;\">0.0194</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.49.49.49.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.49.49.49.4.1\" style=\"font-size:90%;\">0.9472</span><span class=\"ltx_text\" id=\"S4.T2.49.49.49.4.2\" style=\"font-size:90%;\">0.0019</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.50.50.50.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.50.50.50.5.1\" style=\"font-size:90%;\">0.8798</span><span class=\"ltx_text\" id=\"S4.T2.50.50.50.5.2\" style=\"font-size:90%;\">0.0084</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.51.51.51.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.51.51.51.6.1\" 
style=\"font-size:90%;\">0.8442</span><span class=\"ltx_text\" id=\"S4.T2.51.51.51.6.2\" style=\"font-size:90%;\">0.0096</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.52.52.52.7\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.52.52.52.7.1\" style=\"font-size:90%;\">0.6707</span><span class=\"ltx_text\" id=\"S4.T2.52.52.52.7.2\" style=\"font-size:90%;\">0.0031</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.53.53.53.8\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.53.53.53.8.1\" style=\"font-size:90%;\">0.6051</span><span class=\"ltx_text\" id=\"S4.T2.53.53.53.8.2\" style=\"font-size:90%;\">0.0230</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.54.54.54.9\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.54.54.54.9.1\" style=\"font-size:90%;\">0.4479</span><span class=\"ltx_text\" id=\"S4.T2.54.54.54.9.2\" style=\"font-size:90%;\">0.0210</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.63.63.63\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.63.63.63.10\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.63.63.63.10.1\" style=\"font-size:90%;\">GTAN</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.55.55.55.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.55.55.55.1.1\" style=\"font-size:90%;\">0.8675</span><span class=\"ltx_text\" id=\"S4.T2.55.55.55.1.2\" style=\"font-size:90%;\">0.0036</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.56.56.56.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.56.56.56.2.1\" style=\"font-size:90%;\">0.7254</span><span class=\"ltx_text\" id=\"S4.T2.56.56.56.2.2\" style=\"font-size:90%;\">0.0197</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.57.57.57.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.57.57.57.3.1\" style=\"font-size:90%;\">0.6425</span><span class=\"ltx_text\" id=\"S4.T2.57.57.57.3.2\" style=\"font-size:90%;\">0.0154</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.58.58.58.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.58.58.58.4.1\" style=\"font-size:90%;\">0.9580</span><span class=\"ltx_text\" id=\"S4.T2.58.58.58.4.2\" style=\"font-size:90%;\">0.0014</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.59.59.59.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.59.59.59.5.1\" style=\"font-size:90%;\">0.8954</span><span class=\"ltx_text\" id=\"S4.T2.59.59.59.5.2\" style=\"font-size:90%;\">0.0095</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.60.60.60.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.60.60.60.6.1\" style=\"font-size:90%;\">0.8718</span><span class=\"ltx_text\" id=\"S4.T2.60.60.60.6.2\" style=\"font-size:90%;\">0.0083</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.61.61.61.7\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.61.61.61.7.1\" style=\"font-size:90%;\">0.7496</span><span class=\"ltx_text\" id=\"S4.T2.61.61.61.7.2\" style=\"font-size:90%;\">0.0041</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.62.62.62.8\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span 
class=\"ltx_text\" id=\"S4.T2.62.62.62.8.1\" style=\"font-size:90%;\">0.6714</span><span class=\"ltx_text\" id=\"S4.T2.62.62.62.8.2\" style=\"font-size:90%;\">0.0089</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.63.63.63.9\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.63.63.63.9.1\" style=\"font-size:90%;\">0.5709</span><span class=\"ltx_text\" id=\"S4.T2.63.63.63.9.2\" style=\"font-size:90%;\">0.0097</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.72.72.72\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.72.72.72.10\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.72.72.72.10.1\" style=\"font-size:90%;\">Cat-GNN(FI)</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.64.64.64.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.64.64.64.1.1\" style=\"font-size:90%;\">0.8721</span><span class=\"ltx_text\" id=\"S4.T2.64.64.64.1.2\" style=\"font-size:90%;\">0.0044</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.65.65.65.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.65.65.65.2.1\" style=\"font-size:90%;\">0.7336</span><span class=\"ltx_text\" id=\"S4.T2.65.65.65.2.2\" style=\"font-size:90%;\">0.0295</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.66.66.66.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.66.66.66.3.1\" style=\"font-size:90%;\">0.6528</span><span class=\"ltx_text\" id=\"S4.T2.66.66.66.3.2\" style=\"font-size:90%;\">0.0209</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.67.67.67.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.67.67.67.4.1\" style=\"font-size:90%;\">0.9643</span><span class=\"ltx_text\" id=\"S4.T2.67.67.67.4.2\" style=\"font-size:90%;\">0.0026</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.68.68.68.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.68.68.68.5.1\" style=\"font-size:90%;\">0.9011</span><span class=\"ltx_text\" id=\"S4.T2.68.68.68.5.2\" style=\"font-size:90%;\">0.0129</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.69.69.69.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.69.69.69.6.1\" style=\"font-size:90%;\">0.8794</span><span class=\"ltx_text\" id=\"S4.T2.69.69.69.6.2\" style=\"font-size:90%;\">0.0102</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.70.70.70.7\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.70.70.70.7.1\" style=\"font-size:90%;\">0.7643</span><span class=\"ltx_text\" id=\"S4.T2.70.70.70.7.2\" style=\"font-size:90%;\">0.0078</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.71.71.71.8\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.71.71.71.8.1\" style=\"font-size:90%;\">0.6907</span><span class=\"ltx_text\" id=\"S4.T2.71.71.71.8.2\" style=\"font-size:90%;\">0.0198</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.72.72.72.9\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.72.72.72.9.1\" style=\"font-size:90%;\">0.5925</span><span class=\"ltx_text\" 
id=\"S4.T2.72.72.72.9.2\" style=\"font-size:90%;\">0.0174</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.81.81.81\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.81.81.81.10\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.81.81.81.10.1\" style=\"font-size:90%;\">Cat-GNN(FL)</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.73.73.73.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.73.73.73.1.1\" style=\"font-size:90%;\">0.89100.0026</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.74.74.74.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.74.74.74.2.1\" style=\"font-size:90%;\">0.7692</span><span class=\"ltx_text\" id=\"S4.T2.74.74.74.2.2\" style=\"font-size:90%;\">0.0182</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.75.75.75.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.75.75.75.3.1\" style=\"font-size:90%;\">0.6687</span><span class=\"ltx_text\" id=\"S4.T2.75.75.75.3.2\" style=\"font-size:90%;\">0.0135</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.76.76.76.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.76.76.76.4.1\" style=\"font-size:90%;\">0.97050.0016</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.77.77.77.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.77.77.77.5.1\" style=\"font-size:90%;\">0.91250.0099</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.78.78.78.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.78.78.78.6.1\" style=\"font-size:90%;\">0.89420.0081</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.79.79.79.7\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.79.79.79.7.1\" style=\"font-size:90%;\">0.8023</span><span class=\"ltx_text\" id=\"S4.T2.79.79.79.7.2\" style=\"font-size:90%;\">0.0067</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.80.80.80.8\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.80.80.80.8.1\" style=\"font-size:90%;\">0.7031</span><span class=\"ltx_text\" id=\"S4.T2.80.80.80.8.2\" style=\"font-size:90%;\">0.0154</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.81.81.81.9\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.81.81.81.9.1\" style=\"font-size:90%;\">0.6145</span><span class=\"ltx_text\" id=\"S4.T2.81.81.81.9.2\" style=\"font-size:90%;\">0.0169</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.90.90.90\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.90.90.90.10\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.90.90.90.10.1\" style=\"font-size:90%;\">Cat-GNN(PI)</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.82.82.82.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.82.82.82.1.1\" style=\"font-size:90%;\">0.8895</span><span class=\"ltx_text\" id=\"S4.T2.82.82.82.1.2\" style=\"font-size:90%;\">0.0041</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.83.83.83.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span 
class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.83.83.83.2.1\" style=\"font-size:90%;\">0.77060.0223</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.84.84.84.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.84.84.84.3.1\" style=\"font-size:90%;\">0.67010.0181</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.85.85.85.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.85.85.85.4.1\" style=\"font-size:90%;\">0.9669</span><span class=\"ltx_text\" id=\"S4.T2.85.85.85.4.2\" style=\"font-size:90%;\">0.0021</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.86.86.86.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.86.86.86.5.1\" style=\"font-size:90%;\">0.9077</span><span class=\"ltx_text\" id=\"S4.T2.86.86.86.5.2\" style=\"font-size:90%;\">0.0113</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.87.87.87.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.87.87.87.6.1\" style=\"font-size:90%;\">0.8896</span><span class=\"ltx_text\" id=\"S4.T2.87.87.87.6.2\" style=\"font-size:90%;\">0.0095</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.88.88.88.7\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.88.88.88.7.1\" style=\"font-size:90%;\">0.81450.0061</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.89.89.89.8\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.89.89.89.8.1\" style=\"font-size:90%;\">0.70960.0149</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.90.90.90.9\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.90.90.90.9.1\" style=\"font-size:90%;\">0.62940.0166</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.99.99.99\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T2.99.99.99.10\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.99.99.99.10.1\" style=\"font-size:90%;\">Cat-GNN(PL)</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.91.91.91.1\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.91.91.91.1.1\" style=\"font-size:90%;\">0.90350.0035</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.92.92.92.2\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.92.92.92.2.1\" style=\"font-size:90%;\">0.77830.0209</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.93.93.93.3\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.93.93.93.3.1\" style=\"font-size:90%;\">0.68630.0127</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.94.94.94.4\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.94.94.94.4.1\" style=\"font-size:90%;\">0.97060.0015</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.95.95.95.5\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.95.95.95.5.1\" style=\"font-size:90%;\">0.91630.0104</span></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_bb\" id=\"S4.T2.96.96.96.6\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.96.96.96.6.1\" style=\"font-size:90%;\">0.89750.0089</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.97.97.97.7\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.97.97.97.7.1\" style=\"font-size:90%;\">0.82810.0054</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.98.98.98.8\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.98.98.98.8.1\" style=\"font-size:90%;\">0.72110.0115</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.99.99.99.9\" style=\"padding-left:3.0pt;padding-right:3.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.99.99.99.9.1\" style=\"font-size:90%;\">0.64570.0156</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
98
+ "capture": "Table 2: Performance Comparison (in percent \u00b1 standard deviation) on YelpChi, Amazon and S-FFSD datasets across five runs. The best performances are marked with bold font, and the second-to-best are shown underlined."
99
+ },
100
+ "3": {
101
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T3.2.1.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S4.T3.3.2\" style=\"font-size:90%;\">Experimental run times with and without causal intervention on three datasets. The experiments were conducted on a Tesla V100 40GB GPU, with the execution times measured in seconds.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T3.4\" style=\"width:433.6pt;height:63.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(33.6pt,-5.0pt) scale(1.1834041774021,1.1834041774021) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.4.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.4.1.1.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.4.1.1.1.2\">YelpChi</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.4.1.1.1.3\">Amazon</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.4.1.1.1.4\">S-FFSD</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.4.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.4.1.2.1.1\" style=\"padding-bottom:2.0pt;\">No-intervention</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.4.1.2.1.2\" style=\"padding-bottom:2.0pt;\">126.676</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.4.1.2.1.3\" style=\"padding-bottom:2.0pt;\">110.518</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.4.1.2.1.4\" style=\"padding-bottom:2.0pt;\">208.085</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.1.3.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.4.1.3.2.1\" style=\"padding-bottom:2.0pt;\">Causal-intervention</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.4.1.3.2.2\" style=\"padding-bottom:2.0pt;\">129.481 <span class=\"ltx_text\" id=\"S4.T3.4.1.3.2.2.1\" style=\"color:#0000FF;\">(+2.21%)</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.4.1.3.2.3\" style=\"padding-bottom:2.0pt;\">113.660 <span class=\"ltx_text\" id=\"S4.T3.4.1.3.2.3.1\" style=\"color:#0000FF;\">(+2.84%)</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.4.1.3.2.4\" style=\"padding-bottom:2.0pt;\">213.341 <span class=\"ltx_text\" id=\"S4.T3.4.1.3.2.4.1\" style=\"color:#0000FF;\">(+2.52%)</span>\n</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
102
+ "capture": "Table 3: Experimental run times with and without causal intervention on three datasets. The experiments were conducted on a Tesla V100 40GB GPU, with the execution times measured in seconds."
103
+ }
104
+ },
105
+ "image_paths": {
106
+ "1": {
107
+ "figure_path": "2402.14708v2_figure_1.png",
108
+ "caption": "Figure 1: The model overview. First Stage (discovery): we utilize an attention map in the attention temporal network to identify causal nodes and environment nodes. Second Stage: Intervention, we apply causal mix-up enhancement to the environment nodes.",
109
+ "url": "http://arxiv.org/html/2402.14708v2/x1.png"
110
+ },
111
+ "2": {
112
+ "figure_path": "2402.14708v2_figure_2.png",
113
+ "caption": "Figure 2: Motivation. The original prediction incorrectly identifies a fraudster (central node labeled xisubscript\ud835\udc65\ud835\udc56x_{i}italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT) as benign, as does the state-of-the-art GTAN model. Following our causal intervention, the prediction is correctly adjusted to identify xisubscript\ud835\udc65\ud835\udc56x_{i}italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT as a fraudster. Green: benign users, red: fraudsters, gray: unlabeled nodes.",
114
+ "url": "http://arxiv.org/html/2402.14708v2/x2.png"
115
+ },
116
+ "3": {
117
+ "figure_path": "2402.14708v2_figure_3.png",
118
+ "caption": "Figure 3: The depiction of the proposed model\u2019s architecture, featuring a causal temporal graph attention mechanism, alongside the theoretical support for backdoor adjustment.",
119
+ "url": "http://arxiv.org/html/2402.14708v2/x3.png"
120
+ },
121
+ "4": {
122
+ "figure_path": "2402.14708v2_figure_4.png",
123
+ "caption": "Figure 4: The ablation study results on three datasets. Gray bars represent the D-CaT variant, blue bars represent the N-CaT variant, and orange bars represent the CaT-GNN model.",
124
+ "url": "http://arxiv.org/html/2402.14708v2/x4.png"
125
+ },
126
+ "5(a)": {
127
+ "figure_path": "2402.14708v2_figure_5(a).png",
128
+ "caption": "Figure 5: Sensitivity analysis with respect to different training ratios (Left) and environment ratios (Right).",
129
+ "url": "http://arxiv.org/html/2402.14708v2/x5.png"
130
+ },
131
+ "5(b)": {
132
+ "figure_path": "2402.14708v2_figure_5(b).png",
133
+ "caption": "Figure 5: Sensitivity analysis with respect to different training ratios (Left) and environment ratios (Right).",
134
+ "url": "http://arxiv.org/html/2402.14708v2/x6.png"
135
+ }
136
+ },
137
+ "validation": true,
138
+ "references": [
139
+ {
140
+ "1": {
141
+ "title": "Computing graph neural networks: A survey from algorithms to accelerators.",
142
+ "author": "Sergi Abadal, Akshay Jain, Robert Guirado, Jorge L\u00f3pez-Alonso, and Eduard Alarc\u00f3n.",
143
+ "venue": "ACM Computing Surveys (CSUR), 54(9):1\u201338, 2021.",
144
+ "url": null
145
+ }
146
+ },
147
+ {
148
+ "2": {
149
+ "title": "A simple model of bank bankruptcies.",
150
+ "author": "Agata Aleksiejuk and Janusz A Ho\u0142yst.",
151
+ "venue": "Physica A: Statistical Mechanics and its Applications, 299(1-2):198\u2013204, 2001.",
152
+ "url": null
153
+ }
154
+ },
155
+ {
156
+ "3": {
157
+ "title": "Conceptual building of sustainable economic growth and corporate bankruptcy.",
158
+ "author": "Latifa AlFalahi and Haitham Nobanee.",
159
+ "venue": "Available at SSRN 3472409, 2019.",
160
+ "url": null
161
+ }
162
+ },
163
+ {
164
+ "4": {
165
+ "title": "Data mining for credit card fraud: A comparative study.",
166
+ "author": "Siddhartha Bhattacharyya, Sanjeev Jha, Kurian Tharakunnel, and J Christopher Westland.",
167
+ "venue": "Decision support systems, 50(3):602\u2013613, 2011.",
168
+ "url": null
169
+ }
170
+ },
171
+ {
172
+ "5": {
173
+ "title": "Invariant rationalization.",
174
+ "author": "Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola.",
175
+ "venue": "In International Conference on Machine Learning, pages 1448\u20131458. PMLR, 2020.",
176
+ "url": null
177
+ }
178
+ },
179
+ {
180
+ "6": {
181
+ "title": "Graph neural network for fraud detection via spatial-temporal attention.",
182
+ "author": "Dawei Cheng, Xiaoyang Wang, Ying Zhang, and Liqing Zhang.",
183
+ "venue": "IEEE Transactions on Knowledge and Data Engineering, 34(8):3800\u20133813, 2020.",
184
+ "url": null
185
+ }
186
+ },
187
+ {
188
+ "7": {
189
+ "title": "Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks.",
190
+ "author": "Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh.",
191
+ "venue": "In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining, pages 257\u2013266, 2019.",
192
+ "url": null
193
+ }
194
+ },
195
+ {
196
+ "8": {
197
+ "title": "Learning steady-states of iterative algorithms over graphs.",
198
+ "author": "Hanjun Dai, Zornitsa Kozareva, Bo Dai, Alex Smola, and Le Song.",
199
+ "venue": "In International conference on machine learning, pages 1106\u20131114. PMLR, 2018.",
200
+ "url": null
201
+ }
202
+ },
203
+ {
204
+ "9": {
205
+ "title": "Enhancing graph neural network-based fraud detectors against camouflaged fraudsters.",
206
+ "author": "Yingtong Dou, Zhiwei Liu, Li Sun, Yutong Deng, Hao Peng, and Philip S Yu.",
207
+ "venue": "In Proceedings of the 29th ACM international conference on information & knowledge management, pages 315\u2013324, 2020.",
208
+ "url": null
209
+ }
210
+ },
211
+ {
212
+ "10": {
213
+ "title": "Should graph convolution trust neighbors? a simple causal inference method.",
214
+ "author": "Fuli Feng, Weiran Huang, Xiangnan He, Xin Xin, Qifan Wang, and Tat-Seng Chua.",
215
+ "venue": "In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1208\u20131218, 2021.",
216
+ "url": null
217
+ }
218
+ },
219
+ {
220
+ "11": {
221
+ "title": "Using generative adversarial networks for improving classification effectiveness in credit card fraud detection.",
222
+ "author": "Ugo Fiore, Alfredo De Santis, Francesca Perla, Paolo Zanetti, and Francesco Palmieri.",
223
+ "venue": "Information Sciences, 479:448\u2013455, 2019.",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "12": {
229
+ "title": "Credit card fraud detection using convolutional neural networks.",
230
+ "author": "Kang Fu, Dawei Cheng, Yi Tu, and Liqing Zhang.",
231
+ "venue": "In Neural Information Processing: 23rd International Conference, ICONIP 2016, Kyoto, Japan, October 16\u201321, 2016, Proceedings, Part III 23, pages 483\u2013490. Springer, 2016.",
232
+ "url": null
233
+ }
234
+ },
235
+ {
236
+ "13": {
237
+ "title": "Graph echo state networks.",
238
+ "author": "Claudio Gallicchio and Alessio Micheli.",
239
+ "venue": "In The 2010 international joint conference on neural networks (IJCNN), pages 1\u20138. IEEE, 2010.",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "14": {
245
+ "title": "Attention based spatial-temporal graph convolutional networks for traffic flow forecasting.",
246
+ "author": "Shengnan Guo, Youfang Lin, Ning Feng, Chao Song, and Huaiyu Wan.",
247
+ "venue": "In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 922\u2013929, 2019.",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "15": {
253
+ "title": "Inductive representation learning on large graphs.",
254
+ "author": "Will Hamilton, Zhitao Ying, and Jure Leskovec.",
255
+ "venue": "Advances in neural information processing systems, 30, 2017.",
256
+ "url": null
257
+ }
258
+ },
259
+ {
260
+ "16": {
261
+ "title": "Open graph benchmark: Datasets for machine learning on graphs.",
262
+ "author": "Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec.",
263
+ "venue": "Advances in neural information processing systems, 33:22118\u201322133, 2020.",
264
+ "url": null
265
+ }
266
+ },
267
+ {
268
+ "17": {
269
+ "title": "A machine learning based credit card fraud detection using the ga algorithm for feature selection.",
270
+ "author": "Emmanuel Ileberi, Yanxia Sun, and Zenghui Wang.",
271
+ "venue": "Journal of Big Data, 9(1):1\u201317, 2022.",
272
+ "url": null
273
+ }
274
+ },
275
+ {
276
+ "18": {
277
+ "title": "Suspicious behavior detection: Current trends and future directions.",
278
+ "author": "Meng Jiang, Peng Cui, and Christos Faloutsos.",
279
+ "venue": "IEEE intelligent systems, 31(1):31\u201339, 2016.",
280
+ "url": null
281
+ }
282
+ },
283
+ {
284
+ "19": {
285
+ "title": "Uncertainty quantification via spatial-temporal tweedie model for zero-inflated and long-tail travel demand prediction.",
286
+ "author": "Xinke Jiang, Dingyi Zhuang, Xianghui Zhang, Hao Chen, Jiayuan Luo, and Xiaowei Gao.",
287
+ "venue": "In CIKM, 2023.",
288
+ "url": null
289
+ }
290
+ },
291
+ {
292
+ "20": {
293
+ "title": "Incomplete graph learning via attribute-structure decoupled variational auto-encoder.",
294
+ "author": "Xinke Jiang, Zidi Qin, Jiarong Xu, and Xiang Ao.",
295
+ "venue": "In WSDM, 2024.",
296
+ "url": null
297
+ }
298
+ },
299
+ {
300
+ "21": {
301
+ "title": "Gated graph sequence neural networks.",
302
+ "author": "Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel.",
303
+ "venue": "arXiv preprint arXiv:1511.05493, 2015.",
304
+ "url": null
305
+ }
306
+ },
307
+ {
308
+ "22": {
309
+ "title": "Adaptive graph convolutional neural networks.",
310
+ "author": "Ruoyu Li, Sheng Wang, Feiyun Zhu, and Junzhou Huang.",
311
+ "venue": "In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018.",
312
+ "url": null
313
+ }
314
+ },
315
+ {
316
+ "23": {
317
+ "title": "Mining spatio-temporal relations via self-paced graph contrastive learning.",
318
+ "author": "Rongfan Li, Ting Zhong, Xinke Jiang, Goce Trajcevski, Jin Wu, and Fan Zhou.",
319
+ "venue": "In SIGKDD, 2022.",
320
+ "url": null
321
+ }
322
+ },
323
+ {
324
+ "24": {
325
+ "title": "Heterogeneous graph neural networks for malicious account detection.",
326
+ "author": "Ziqi Liu, Chaochao Chen, Xinxing Yang, Jun Zhou, Xiaolong Li, and Le Song.",
327
+ "venue": "In Proceedings of the 27th ACM international conference on information and knowledge management, pages 2077\u20132085, 2018.",
328
+ "url": null
329
+ }
330
+ },
331
+ {
332
+ "25": {
333
+ "title": "Alleviating the inconsistency problem of applying graph neural network to fraud detection.",
334
+ "author": "Zhiwei Liu, Yingtong Dou, Philip S Yu, Yutong Deng, and Hao Peng.",
335
+ "venue": "In Proceedings of the 43rd international ACM SIGIR conference on research and development in information retrieval, pages 1569\u20131572, 2020.",
336
+ "url": null
337
+ }
338
+ },
339
+ {
340
+ "26": {
341
+ "title": "Pick and choose: a gnn-based imbalanced learning approach for fraud detection.",
342
+ "author": "Yang Liu, Xiang Ao, Zidi Qin, Jianfeng Chi, Jinghua Feng, Hao Yang, and Qing He.",
343
+ "venue": "In Proceedings of the web conference 2021, pages 3168\u20133177, 2021.",
344
+ "url": null
345
+ }
346
+ },
347
+ {
348
+ "27": {
349
+ "title": "Towards robust and adaptive motion forecasting: A causal representation perspective.",
350
+ "author": "Yuejiang Liu, Riccardo Cadei, Jonas Schweizer, Sherwin Bahmani, and Alexandre Alahi.",
351
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17081\u201317092, 2022.",
352
+ "url": null
353
+ }
354
+ },
355
+ {
356
+ "28": {
357
+ "title": "Credit card fraud detection using bayesian and neural networks.",
358
+ "author": "Sam Maes, Karl Tuyls, Bram Vanschoenwinkel, and Bernard Manderick.",
359
+ "venue": "In Proceedings of the 1st international naiso congress on neuro fuzzy technologies, volume 261, page 270, 2002.",
360
+ "url": null
361
+ }
362
+ },
363
+ {
364
+ "29": {
365
+ "title": "From amateurs to connoisseurs: modeling the evolution of user expertise through online reviews.",
366
+ "author": "Julian John McAuley and Jure Leskovec.",
367
+ "venue": "In Proceedings of the 22nd international conference on World Wide Web, pages 897\u2013908, 2013.",
368
+ "url": null
369
+ }
370
+ },
371
+ {
372
+ "30": {
373
+ "title": "The book of why: the new science of cause and effect.",
374
+ "author": "Judea Pearl and Dana Mackenzie.",
375
+ "venue": "Basic books, 2018.",
376
+ "url": null
377
+ }
378
+ },
379
+ {
380
+ "31": {
381
+ "title": "Causality.",
382
+ "author": "Judea Pearl.",
383
+ "venue": "Cambridge university press, 2009.",
384
+ "url": null
385
+ }
386
+ },
387
+ {
388
+ "32": {
389
+ "title": "Collective opinion spam detection: Bridging review networks and metadata.",
390
+ "author": "Shebuti Rayana and Leman Akoglu.",
391
+ "venue": "In Proceedings of the 21th acm sigkdd international conference on knowledge discovery and data mining, pages 985\u2013994, 2015.",
392
+ "url": null
393
+ }
394
+ },
395
+ {
396
+ "33": {
397
+ "title": "Detecting credit card fraud by decision trees and support vector machines.",
398
+ "author": "Yusuf G \u015eahin and Ekrem Duman.",
399
+ "venue": "2011.",
400
+ "url": null
401
+ }
402
+ },
403
+ {
404
+ "34": {
405
+ "title": "The graph neural network model.",
406
+ "author": "Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini.",
407
+ "venue": "IEEE transactions on neural networks, 20(1):61\u201380, 2008.",
408
+ "url": null
409
+ }
410
+ },
411
+ {
412
+ "35": {
413
+ "title": "Masked label prediction: Unified message passing model for semi-supervised classification.",
414
+ "author": "Yunsheng Shi, Zhengjie Huang, Shikun Feng, Hui Zhong, Wenjin Wang, and Yu Sun.",
415
+ "venue": "arXiv preprint arXiv:2009.03509, 2020.",
416
+ "url": null
417
+ }
418
+ },
419
+ {
420
+ "36": {
421
+ "title": "Graph attention networks.",
422
+ "author": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio.",
423
+ "venue": "arXiv preprint arXiv:1710.10903, 2017.",
424
+ "url": null
425
+ }
426
+ },
427
+ {
428
+ "37": {
429
+ "title": "A semi-supervised graph attentive network for financial fraud detection.",
430
+ "author": "Daixin Wang, Jianbin Lin, Peng Cui, Quanhui Jia, Zhen Wang, Yanming Fang, Quan Yu, Jun Zhou, Shuang Yang, and Yuan Qi.",
431
+ "venue": "In 2019 IEEE International Conference on Data Mining (ICDM), pages 598\u2013607. IEEE, 2019.",
432
+ "url": null
433
+ }
434
+ },
435
+ {
436
+ "38": {
437
+ "title": "Nodeaug: Semi-supervised node classification with data augmentation.",
438
+ "author": "Yiwei Wang, Wei Wang, Yuxuan Liang, Yujun Cai, Juncheng Liu, and Bryan Hooi.",
439
+ "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 207\u2013217, 2020.",
440
+ "url": null
441
+ }
442
+ },
443
+ {
444
+ "39": {
445
+ "title": "Graph wavenet for deep spatial-temporal graph modeling.",
446
+ "author": "Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, and Chengqi Zhang.",
447
+ "venue": "arXiv preprint arXiv:1906.00121, 2019.",
448
+ "url": null
449
+ }
450
+ },
451
+ {
452
+ "40": {
453
+ "title": "A comprehensive survey on graph neural networks.",
454
+ "author": "Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip.",
455
+ "venue": "IEEE transactions on neural networks and learning systems, 32(1):4\u201324, 2020.",
456
+ "url": null
457
+ }
458
+ },
459
+ {
460
+ "41": {
461
+ "title": "Temporal and heterogeneous graph neural network for financial time series prediction.",
462
+ "author": "Sheng Xiang, Dawei Cheng, Chencheng Shang, Ying Zhang, and Yuqi Liang.",
463
+ "venue": "In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pages 3584\u20133593, 2022.",
464
+ "url": null
465
+ }
466
+ },
467
+ {
468
+ "42": {
469
+ "title": "Semi-supervised credit card fraud detection via attribute-driven graph representation.",
470
+ "author": "Sheng Xiang, Mingzhi Zhu, Dawei Cheng, Enxia Li, Ruihui Zhao, Yi Ouyang, Ling Chen, and Yefeng Zheng.",
471
+ "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 14557\u201314565, 2023.",
472
+ "url": null
473
+ }
474
+ },
475
+ {
476
+ "43": {
477
+ "title": "How powerful are graph neural networks?",
478
+ "author": "Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka.",
479
+ "venue": "arXiv preprint arXiv:1810.00826, 2018.",
480
+ "url": null
481
+ }
482
+ },
483
+ {
484
+ "44": {
485
+ "title": "Spatial temporal graph convolutional networks for skeleton-based action recognition.",
486
+ "author": "Sijie Yan, Yuanjun Xiong, and Dahua Lin.",
487
+ "venue": "In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018.",
488
+ "url": null
489
+ }
490
+ },
491
+ {
492
+ "45": {
493
+ "title": "Key player identification in underground forums over attributed heterogeneous information network embedding framework.",
494
+ "author": "Yiming Zhang, Yujie Fan, Yanfang Ye, Liang Zhao, and Chuan Shi.",
495
+ "venue": "In Proceedings of the 28th ACM international conference on information and knowledge management, pages 549\u2013558, 2019.",
496
+ "url": null
497
+ }
498
+ },
499
+ {
500
+ "46": {
501
+ "title": "Dual graph convolutional networks for graph-based semi-supervised classification.",
502
+ "author": "Chenyi Zhuang and Qiang Ma.",
503
+ "venue": "In Proceedings of the 2018 world wide web conference, pages 499\u2013508, 2018.",
504
+ "url": null
505
+ }
506
+ }
507
+ ],
508
+ "url": "http://arxiv.org/html/2402.14708v2"
509
+ }
20241127/2403.05441v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2403.14494v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2403.16790v2.json ADDED
@@ -0,0 +1,417 @@
1
+ {
2
+ "title": "Iso-Diffusion: Improving Diffusion Probabilistic Models Using the Isotropy of the Additive Gaussian Noise",
3
+ "abstract": "Denoising Diffusion Probabilistic Models (DDPMs) have accomplished much in the realm of generative AI. With the tremendous level of popularity the Generative AI algorithms have achieved, the demand for higher levels of performance continues to increase. Under this backdrop, careful scrutinization of algorithm performance under sample fidelity type measures is essential to ascertain how, effectively, the underlying structures of the data distribution were learned. In this context, minimizing the mean squared error between the additive and predicted noise alone does not impose structural integrity constraints on the predicted noise, for instance, isotropic. Under this premise, we were motivated to utilize the isotropy of the additive noise as a constraint on the objective function to enhance the fidelity of DDPMs. Our approach is simple and can be applied to any DDPM variant. We validate our approach by presenting experiments conducted on four synthetic 2D datasets as well as on unconditional image generation. As demonstrated by the results, the incorporation of this constraint improves the fidelity metrics, Precision and Density, and the results clearly indicate how the structural imposition was effective.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "###figure_1### Diffusion models have been accomplishing great feats in the realm of generative AI, specifically in terms of unconditional and conditional image generation ([19 ###reference_b19###], [9 ###reference_b9###], [26 ###reference_b26###], [18 ###reference_b18###], [22 ###reference_b22###], [24 ###reference_b24###], [6 ###reference_b6###],\n[10 ###reference_b10###],\n[1 ###reference_b1###]). Starting with the revolutionary paper by Ho et al. [7 ###reference_b7###] and the improvements by Nichol et al. [19 ###reference_b19###] as well as the Latent Diffusion Model by Rombach et al. [24 ###reference_b24###], these models have had the biggest impact in this context. The fidelity and diversity of the images generated by these models are surprisingly amazing. Yet, as with all models, these models can still be improved upon closer inspection. As with the improvements done by Nichol et al. [19 ###reference_b19###] to the original Denoising Diffusion Probabilistic Model (DDPM) by introducing techniques such as the cosine-based variance schedule and allowing the model to learn the variance rather than keeping it fixed helped improve the performance of DDPMs. Our goal in this paper is to make a similar contribution with regard to the improvement of the important fidelity metrics, Density [16 ###reference_b16###] and Precision [13 ###reference_b13###], by imposing possible regularizations that promote the modified DDPM algorithm to learn the underlying structures, diversity, modality and density spread of the true distribution.\nAlthough DDPMs perform well, we noticed that these existing models do not necessarily incorporate any distributional (structural) information about the particular dataset it tries to generate. Typically, the DDPM\u2019s forward process gradually pushes the dataset towards an isotropic Gaussian, which can be thought of as the structural vanishing point of the data distribution [17 ###reference_b17###]. This implies a well-placed point of origin for the generative process (reverse path) from a point of complete lack of structure toward the final destination, which is the data distribution. In the DDPM implementation, the learning process considers the expected squared norm difference between the additive Gaussian noise and the predicted noise as its objective function. Therefore, for the generative process, to enhance the aforementioned creation of structure, the objective function can be modified to include any structural measure, such as isotropy.\nThus, we were motivated to include the isotropic nature of the additive Gaussian noise when optimizing for the objective to further enhance the statistical properties of the predicted noise through backpropagation. The current objective function of the DDPM does not include any mechanism that explicitly encourages the isotropic nature of the predicted noise. Therefore, a mechanism that guarantees the model progresses from a more non-isotropic distribution (distributions with multiple modes, discontinuities or non-uniformities of density distributions, non-uniformly distributed spatial structures) to an isotropic Gaussian distribution toward the vanishing point in a structured and learned manner is needed. 
Our intuition is that by capturing the statistical properties of the noise in more detail, the model will be able to produce higher-fidelity samples as it would have much more information regarding the distributional structure of the samples.\nAs the rationale for introducing isotropy to the objective function has been established, now, let us see how isotropy establishes convergence and quantifies structural information about the distribution. For example, the isotropy of an isotropic random vector in is the expected squared norm of that vector, which is equal to its dimension, [34 ###reference_b34###]. This establishes the convergence in the limit for a normalized distribution with a complete lack of structure, which in other words is isotropic. Conversely, the desired distribution, which has more structure and is more non-isotropic, would consequently have a lower isotropy value. This implies that the generative process, in its drive towards a structural distribution, minimizes isotropy. Furthermore, when analyzing the mean square error objective, we observed that incorporating the isotropic nature of the noise effectively makes the objective function equal to zero in expectation.\nThe inclusion of this constraint does not incur a large computational cost and can be readily applied to any of the diffusion model variants. In this work, we scrutinized the DDPM model\u2019s behavioral aspects to interpret its functionality using well-defined 2D synthetic datasets, such as Swiss Roll, Scattered Moon, Moon with two circles and Central Banana, to draw fundamental conclusions about DDPM algorithm. Furthermore, we experimented on four 2D synthetic datasets with our modified objective function and showed that the fidelity metrics, in particular the Precision and Density, improved significantly. In addition, we validate our approach to unconditional image generation using the Oxford Flower [20 ###reference_b20###] and Oxford-IIIT Pet [21 ###reference_b21###], CIFAR-10 [12 ###reference_b12###] and CIFAR-100 [12 ###reference_b12###] datasets. We compare the fidelity and diversity of the generated samples based on key evaluation metrics such as Precision and Recall [13 ###reference_b13###], Density and Coverage\n[16 ###reference_b16###], Frechet Inception Distance (FID) [5 ###reference_b5###] and Inception Score (IS) [28 ###reference_b28###].\nThe contributions of this work are as follows:\nWe introduce Iso-Diffusion: a modified approach that introduces an isotropic constraint on the predicted noise objective function to steer the generative process in a structurally coherent manner. This results in improved fidelity of the generated data distribution. We believe, to the best of our knowledge, that we are the first to propose such a modified loss based on the structural properties of the noise.\nWe analyze the simple loss function in the DDPM and its connection to isotropy. Moreover, we show that the isotropy of the data distribution monotonically increases and converges to the maximum isotropy value, which corresponds to an isotropic Gaussian distribution. This confirms that the definition of isotropy mentioned in this paper, conveys information about the structure of the data distribution when the data distribution undergoes the forward process in DDPMs.\nWe evaluate and validate our approach on four 2D synthetic datasets as well as on the task of unconditional image generation on Oxford Flower, Oxford-IIIT Pet, CIFAR-10 and CIFAR-100 datasets. 
Considering the key evaluation metrics, such as Precision, Recall, Density, Coverage, FID and IS, the modified objective is able to surpass the original DDPM with a significant gap in terms of the fidelity metrics, Density and Precision.\nWe conduct an in-depth analysis of the Density and Coverage metrics to evaluate the generative capabilities of Iso-Diffusion compared to DDPM. This analysis facilitates a detailed comparison between the generated and true data distributions, visually illustrating Iso-Diffusion\u2019s superior alignment with the true distribution. Furthermore, it highlights the importance of these metrics for assessing generative AI algorithms in computer vision applications."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "Generative models, particularly in recent years, have gained significant momentum due to their applications in various fields. They began with specific use cases and have evolved along a clear trajectory, as outlined below.\nDeep Generative Models Generative models (GANs [3 ###reference_b3###], VAEs [11 ###reference_b11###], flow-based models [23 ###reference_b23###], and diffusion models [7 ###reference_b7###]) learn the probability distribution of given data, allowing us to sample new data points from the distribution. Deep generative models have been used for generating images, videos [8 ###reference_b8###], 3d objects [15 ###reference_b15###], etc. Moreover, these models have been used for inverse problem solving [33 ###reference_b33###] [14 ###reference_b14###] and to understanding the latent representations of the distributions.\nDiffusion Models Diffusion models, in particular, have been making huge improvements and have been used in many domains due to their high generative capabilities. There are mainly two types of diffusion models, one is the Score based approach introduced by Song and Ermon [32 ###reference_b32###] and the other, which is the focus of this work, is the one introduced by Ho et al. [7 ###reference_b7###]. Both modeling types have been able to achieve state-of-the-art performance in generative modeling tasks and have motivated the growth of many subsequent works in generative models.\nImproving Diffusion Models In the context of DDPMs [7 ###reference_b7###], there have been several seminal papers that have contributed to the improvement of these models. In particular, Nichol et al.\u2019s [19 ###reference_b19###] work presented several key insights into how one could improve the training of these models. One such insight is the use of a cosine-based variance schedule rather than the linear variance schedule used by Ho et al. [7 ###reference_b7###]. These changes were able to improve the DDPM further.\nHowever, most of these improvements were focused on improving the models based on the most widely used metrics for image generation, FID and IS. But some of the recent work ([13 ###reference_b13###], [16 ###reference_b16###], [25 ###reference_b25###]), in generative models has pointed out that FID and IS are not necessarily indicative of the actual fidelity of the samples generated by generative models. Thus, researchers have been focusing on finding other metrics, such as Precision and Density, to assess the fidelity of these generated samples [2 ###reference_b2###], [30 ###reference_b30###]. In particular, we observed that the Density takes the local context (measuring how close it is to densely packed samples of the true distribution) of a sample into account during its calculation. We believe that this makes the Density a vital metric to assess the samples\u2019 fidelity."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Background",
21
+ "text": "###figure_2### ###figure_3### Diffusion probabilistic models were first introduced by Sohl-Dickstein et al. [31 ###reference_b31###] These models fall under the category of generative models which learn the distribution of the data so that they can sample from these data distributions. However, it was not until Ho et al. [7 ###reference_b7###] that Diffusion Probabilistic Models took off. In the next few subsections, we will provide a brief overview of the DDPM definitions that will be useful to understanding our work."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Definitions",
27
+ "text": "In the DDPM, we simply add a Gaussian noise, which varies according to a specific variance schedule, . The noise at each time-step corrupts the data, such that by the time the time-step reaches its final value, , the data will be mapped to an almost isotropic Gaussian distribution. However, the learning occurs when we try to learn the reverse process by which we try to denoise along the same trajectory starting from the almost isotropic Gaussian distribution. The first process, in which we add noise, is called the forward process and the latter, in which we denoise, is called the reverse process. The forward process is often characterized by and the reverse process by . Both of which are modeled as Gaussian distributions.\nThe forward process is defined as follows,\nMoreoever, by introducing as well as the forward process can be further simplified into the following expression via the re-parametrization trick [11 ###reference_b11###]. Since,\nwhere, .\nThe reverse process, given by , can be obtained in terms of the forward process distribution and Baye\u2019s Theorem. However, the reverse process only becomes tractable when the posterior distribution , is conditioned on the input data . Thus, during training, the model tries to learn the tractable distribution. This distribution, which is also a Gaussian distribution, is defined by the following equation and parameters."
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Training Process",
33
+ "text": "To train, however, one could make the model predict the mean of the reverse process distribution at each time step. But Ho et al. [7 ###reference_b7###] mentions that predicting the additive noise, , leads to better results. The additive noise and the mean of the reverse process distribution at each time step are elegantly linked by equations 5 ###reference_### and 8 ###reference_###. This results in the following re-parametrization of ,\nTherefore, predicting the additive noise , is adequate for the task of predicting the mean of the backward process distribution. Moreover, since the forward process\u2019 variance schedule is fixed, the reverse process variance, , is also assumed to be fixed according to .\nThus, Ho et al. [7 ###reference_b7###] proposes to optimize the following simple objective function during the training process.\nwhere is the predicted noise."
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "Hidden Statistical Properties of",
39
+ "text": "As discussed earlier, one of the main objectives of the proposed algorithm is to identify a learnable isotropic measure that best reflects the overall isotropic nature of the learned samples. This will allow backpropagation to accurately guide the model toward the maximum isotropy at the vanishing point. The identified metric (11 ###reference_###) is the expected squared norm of which will be named isotropy.\nUpon closer inspection of the objective function, we see that the objective of the U-Net is to minimize the mean squared error between and . Yet, if the simple loss is expanded further, a rather informative mathematical expression can be obtained.\nNow, since we know that , it is an isotropic distribution. Thus, by definition, since is an isotropic random vector in , the expected norm of the random vector, .\nFurthermore, since the goal is to predict the noise as accurately as possible, should also be distributed according to an isotropic Gaussian distribution, i.e., . Hence, if and are both identical isotropic random vectors,"
40
+ },
41
+ {
42
+ "section_id": "4",
43
+ "parent_section_id": null,
44
+ "section_name": "Analysis on the Isotropy of",
45
+ "text": "Based on the observations, we were inspired to find out further implications of imposing structural information in the DDPM. As it turns out, we were able to gain more interesting insights about the forward process of the DDPM. For example, if we consider equation 5 ###reference_### and consider the isotropy, expected squared norm of , we see that,\nHowever, since is an isotropic Gaussian random vector, it is isotropic. Moreover, by assuming that it is independent of the distribution of , when is non-isotropic, we see that,\nTherefore,\nThus, when the input data are normalized and they are distributed according to a non-isotropic distribution, we note that the maximum of the expected squared norm of , . Hence, . Thus, during the forward process, since , the expected squared norm of can be at most , and attains the maximum value at the final time-step .\nMoreover, when we consider two consecutive time steps, and , we see that,\nWe know that and that . Thus,\nTherefore, for any particular normalized data distribution, we see that during the forward process, the isotropy of the data distribution increases. Finally, converges to the isotropy of an isotropic random Gaussian vector, when the data distribution completely converts into an isotropic Gaussian distribution (see Figure 2 ###reference_###). Hence, the definition of isotropy given in this paper aligns perfectly with the fact that the isotropy quantifies structural information about the data distribution."
46
+ },
47
+ {
48
+ "section_id": "5",
49
+ "parent_section_id": null,
50
+ "section_name": "Isotropy Based Loss Function",
51
+ "text": "In the default DDPM model, the variance schedule drives the transformation toward an isotropic Gaussian distribution by restricting the degrees of freedom for the movement of information of the distribution, without using backpropagation to adaptively learn the degree of isotropy achieved, making it, a non-learnable process. Armed with the above analyses, we proceeded to modify the objective function to include a regularization term which penalizes the model, if the model predicts a noise which is not necessarily isotropic. Hence, the new modified objective function we propose to optimize is,\nwhere is the regularization parameter.\nHowever, this modified objective needs to be further simplified so as to make this new error, independent of the size of the dimension of the random vector. Thus, we make the following modification during implementation."
52
+ },
53
+ {
54
+ "section_id": "6",
55
+ "parent_section_id": null,
56
+ "section_name": "Interpret the Evaluation Metrics",
57
+ "text": "Precision denotes the fraction of generated data that lies in the true manifold, by counting whether each generated data point falls within a neighborhood sphere of real samples. This measure reflects how closely the generated points align with the true manifold [13 ###reference_b13###], [27 ###reference_b27###].\nRecall denotes the fraction of true data that lies in the generated manifold, by counting whether each true data point falls within a neighborhood sphere of generated samples. This measure indicates how well the true points align with the generated manifold [13 ###reference_b13###], [27 ###reference_b27###].\nDensity counts the number of neighborhood spheres of real samples that encompass each generated data point. This allows Density to reward generated samples located in areas densely populated by real samples, reducing sensitivity to outliers. This enables to consider the local context of a distribution by measuring how close a sample is to densely packed points in the true distribution [16 ###reference_b16###].\nCoverage measures the fraction of real samples whose neighborhoods contain at least one generated sample. Moreover, Coverage measures the diversity of a distribution by assessing whether all aspects of the distribution are represented. However, the presence of sparse outliers in the true manifold and the absence of the generated samples near those outliers may result in low Coverage [16 ###reference_b16###]. (see Figure 3 ###reference_###)"
58
+ },
59
+ {
60
+ "section_id": "7",
61
+ "parent_section_id": null,
62
+ "section_name": "Experiments",
63
+ "text": ""
64
+ },
65
+ {
66
+ "section_id": "7.1",
67
+ "parent_section_id": "7",
68
+ "section_name": "Experimental Setup",
69
+ "text": "To validate our approach we consider 2D synthetic data as well as images. For the 2D data, we utilized a conditional dense network consisting of 3 fully-connected hidden layers with ReLU activations. The learning rate was fixed at 1e-3. All the datasets were learned using 1000 time-steps and 1000 epochs.\nFor the image datasets, we consider the same version of the U-Net utilized in the original DDPM implementation with a learning rate of 2e-4. The U-Net training involved 1000 time-steps and 1000 epochs for the Oxford Flower and Oxford-IIIT Pet datasets, while the CIFAR-10 and CIFAR-100 datasets were trained with 2000 epochs.\nMetrics are reported as the average of 3 training runs per dataset, with PRDC values calculated using k=5 nearest neighbors for each dataset. Moreover, all the experiments were run on one Quadro GV-100 GPU with 32GB of VRAM."
70
+ },
71
+ {
72
+ "section_id": "7.2",
73
+ "parent_section_id": "7",
74
+ "section_name": "Performance comparison between DDPM and Iso-Diffusion",
75
+ "text": "To compare the performance between DDPM and Iso-Diffusion, the modified loss function was utilized in all four 2D synthetic datasets, Oxford Flower dataset, Oxford IIIT Pet dataset and CIFAR-10 dataset. Precision, Recall, Density along with Coverage were used to evaluate and compare the performance of the two models on 2D synthetic datasets. In addition to those four evaluation metrics, FID\nand IS were used to evaluate the quality of the generated samples for the image datasets Oxford Flower, Oxford-IIIT Pet, CIFAR-10 and CIFAR-100.\nTable 1 ###reference_### demonstrates the comparison between the best performing isotropy based model (Iso-Diffusion) and DDPM in terms of the generative model\u2019s evaluation metrics along with the percentage change from DDPM. Across all these 2D synthetic datasets we observed that the fidelity metrics, Precision and Density have been improved in Iso-Diffusion. The results of Table 2 ###reference_### further confirm the improvements made by our modified loss on the quality of image samples. The Density of the generated images has been significantly improved for all four datasets. Moreover, the FID score has been significantly improved in the CIFAR-10 dataset by the proposed method.\nAlthough the performance of the modified loss function has been able to produce sample which surpass the original DDPM\u2019s samples quality, the quality depends on the regularization parameter of the modified loss function. In particular, we performed a few more experiments by considering a range of values for the regularization parameter.\nThe metrics for the Oxford Flower dataset and Oxford-IIIT-Pet dataset with different values of the regularization parameter ranging from 0.01 to 0.30 are tabulated in Table 3 ###reference_### and Table 4 ###reference_###. We see that the fidelity metrics, Precision and Density, gradually improve with the increase of the regularization parameter. However, we can see that the diversity metrics, Recall and Coverage, gradually decline with the parameter.\nAlthough the FID and IS are considered to be the most widely used evaluation metrics for assessing image generation, we see that in the case of all four datasets, they convey little to no discerning information about the generative ability of the proposed method and the original DDPM. But, by using other metrics such as the Precision, Recall, Density and Coverage, we can state that while our proposed method suffers a bit in terms of Recall, the generated samples, are very close to being real (see Figure 1 ###reference_###), as indicated by the improvements in the Precision and Density metrics."
76
+ },
77
+ {
78
+ "section_id": "7.3",
79
+ "parent_section_id": "7",
80
+ "section_name": "Interpretation of the results of 2D data distributions using PRDC values",
81
+ "text": "###figure_4### ###figure_5### ###figure_6### We believe that the disparity in the changes of Precision, Recall, Density and Coverage is a direct consequence of imposing a structural constraint on the objective function. It is evident that by focusing on the structure or the isotropy of the distribution, our method is capable of capturing highly dense mode regions and generating samples near them rather than being too diverse. Thus, it increases the fidelity but decreases the diversity of the generated samples.\nAs illustrated in the Figure 4 ###reference_###(a), the Central Banana distribution was designed by introducing a distinct mode to the main structure of the distribution resulting in a multimodal distribution with a density gradient. Once, it is generated via Iso-Diffusion as indicated in Figure 4 ###reference_###(c), it is evident that,\nIso-Diffusion, is capable of capturing the main structure even with the discontinuities of the density gradient. However, the illustrations show that DDPM lacks the capability of capturing the discontinuity in the density gradient between the tail end of the main distribution and the distinct mode. Instead, it tries to generate data points that are actually not even in the true distribution by interpolating along the main lobe\u2019s trend (see Figure 4 ###reference_###(b)). Moreover, the limited capability to capture the discontinuity in the density gradient of DDPM can be further observed in the Swiss Roll distribution as well (see Figure 6 ###reference_###(b) and 6 ###reference_###(c)). The increase in Density and decrease in Coverage for the datasets Swiss Roll and Central Banana are clear evidence for the aforementioned observations. Hence, it is limited in ability to capture the underlying structure of the distribution. Additionally, there is a noticeable trend of generating data points (blue points in Figure 4 ###reference_###(b), 4 ###reference_###(c)), outside the boundaries of the highly dense regions of the main lobe. This effect is likely due to the model\u2019s focus on these high-density regions. However, compared to DDPM, Iso-Diffusion effectively regulates the overgeneration of data points outside the boundaries of densely packed regions. This improvement is likely a result of the added regularization in the improved object function, which encourages capturing the main semantics of the true distribution.\nAs illustrated in the Figure 5 ###reference_###(a), the Scattered Moon distribution was designed by imposing scattered noise around the main structure of the data distribution. Once, it is generated via Iso-Diffusion as indicated in the Figure 5 ###reference_###(c), it is evident that, the model has tried to only capture the underlying semantics of the distribution without being susceptible to the low probable regions. Whereas, the DDPM model shows limitations in capturing the distinction between the main structure and the scattered noise (see Figure 5 ###reference_###(b)). The increased Density and reduced Coverage values support this observation. This shows that the proposed objective function, enforces the generated samples to contain properties that push them to be closely linked to the real data. Thus, we can directly observe an improvement in the Density metric as it measures the sample fidelity. 
We believe that in the context of unconditional image generation, the isotropy based objective function helps the U-Net learn to keep the generated samples closer to the high-density regions of the ground-truth distribution.\nThese observations highlight the proposed algorithm\u2019s ability to increase Density by focusing on the dense regions of the true distribution. At the same time, the absence of generated data in the neighborhoods of low probable data points in the true distribution may result in a reduction in Coverage. When scattering is minimal, Coverage remains consistent. This indicates that the algorithm effectively captures the main structure of the true distribution without extending into low probable regions. Also, each of these metrics has their own utility depending on the application [13 ###reference_b13###]. Thus, this should motivate the research community to propose new evaluation metrics such as Density, which is a much more meaningful measure of fidelity over FID and IS, to assess generative models.\nPreserving the modality of a data distribution is essential, as failing to capture it can lead to a loss of semantic details or edge information, both of which represent high-level features in computer vision and image processing tasks [4 ###reference_b4###], [29 ###reference_b29###]"
82
+ },
83
+ {
84
+ "section_id": "8",
85
+ "parent_section_id": null,
86
+ "section_name": "Conclusion",
87
+ "text": "Denoising Diffusion Probabilistic Models have achieved state-of-the-art performance in generative modeling tasks such as unconditional image generation and image super-resolution. However, these models can still be improved upon and much work has been put into improving them. In this paper, we propose another improvement method that is built on the premise that since the distribution that the forward process terminates and the reverse process initiates at an isotropic Gaussian distribution, which is void of any structure, it is well motivated, that the incorporation of isotropy as a measure of structure on the loss function will improve the DDPMs\u2019 generated sample fidelity. We, theoretically, show that isotropy is well a defined metric to measure the structure of the distribution during the forward process and the proposed modification helps the DDPM to converge to better solutions based on the simple modified loss. We show that the Iso-Diffusion objective function regulates data point generation, aligning it with the prominent structures of the true distribution and anchoring the process to dense, information-rich modes. Finally, we validate and show that our modified objective function improves DDPM performance through experiments on 2D synthetic datasets and unconditional image generation, supported by an in-depth analysis and evidenced by improved fidelity and diversity metrics."
88
+ }
89
+ ],
90
+ "appendix": [],
91
+ "tables": {
92
+ "1": {
93
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.2.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T1.2.1.1.1\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_middle\" id=\"S5.T1.2.1.1.1.1\" style=\"width:45.0pt;\">\n<span class=\"ltx_p\" id=\"S5.T1.2.1.1.1.1.1\"><span class=\"ltx_text\" id=\"S5.T1.2.1.1.1.1.1.1\" style=\"font-size:90%;\">Metrics</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T1.2.1.1.2\"><span class=\"ltx_text\" id=\"S5.T1.2.1.1.2.1\" style=\"font-size:90%;\">Swiss Roll</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T1.2.1.1.3\"><span class=\"ltx_text\" id=\"S5.T1.2.1.1.3.1\" style=\"font-size:90%;\">Scattered Moon</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T1.2.1.1.4\"><span class=\"ltx_text\" id=\"S5.T1.2.1.1.4.1\" style=\"font-size:90%;\">Moon with two circles</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T1.2.1.1.5\"><span class=\"ltx_text\" id=\"S5.T1.2.1.1.5.1\" style=\"font-size:90%;\">Central Banana</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.2.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.T1.2.2.2.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.2.2.2.2\"><span class=\"ltx_text\" id=\"S5.T1.2.2.2.2.1\" style=\"font-size:90%;\">DDPM</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.2.2.2.3\"><span class=\"ltx_text\" id=\"S5.T1.2.2.2.3.1\" style=\"font-size:90%;\">Ours</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.2.2.2.4\"><span class=\"ltx_text\" id=\"S5.T1.2.2.2.4.1\" style=\"font-size:90%;\">DDPM</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.2.2.2.5\"><span class=\"ltx_text\" id=\"S5.T1.2.2.2.5.1\" style=\"font-size:90%;\">Ours</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.2.2.2.6\"><span class=\"ltx_text\" id=\"S5.T1.2.2.2.6.1\" style=\"font-size:90%;\">DDPM</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.2.2.2.7\"><span class=\"ltx_text\" id=\"S5.T1.2.2.2.7.1\" style=\"font-size:90%;\">Ours</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.2.2.2.8\"><span class=\"ltx_text\" id=\"S5.T1.2.2.2.8.1\" style=\"font-size:90%;\">DDPM</span></th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.2.2.2.9\"><span class=\"ltx_text\" id=\"S5.T1.2.2.2.9.1\" style=\"font-size:90%;\">Ours</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.2.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T1.2.3.1.1\"><span class=\"ltx_text\" id=\"S5.T1.2.3.1.1.1\" style=\"font-size:90%;\">Precision</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.3.1.2\"><span class=\"ltx_text\" id=\"S5.T1.2.3.1.2.1\" style=\"font-size:90%;\">0.9458</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.3.1.3\"><span 
class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.3.1.3.1\" style=\"font-size:90%;\">0.9893 (+4.60%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.3.1.4\"><span class=\"ltx_text\" id=\"S5.T1.2.3.1.4.1\" style=\"font-size:90%;\">0.9990</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.3.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.3.1.5.1\" style=\"font-size:90%;\">0.9993 (+0.03%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.3.1.6\"><span class=\"ltx_text\" id=\"S5.T1.2.3.1.6.1\" style=\"font-size:90%;\">0.9921</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.3.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.3.1.7.1\" style=\"font-size:90%;\">0.9982 (+0.61%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.3.1.8\"><span class=\"ltx_text\" id=\"S5.T1.2.3.1.8.1\" style=\"font-size:90%;\">0.8974</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S5.T1.2.3.1.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.3.1.9.1\" style=\"font-size:90%;\">0.9072 (+1.09%)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.2.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.2.4.2.1\"><span class=\"ltx_text\" id=\"S5.T1.2.4.2.1.1\" style=\"font-size:90%;\">Recall</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.2.4.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.4.2.2.1\" style=\"font-size:90%;\">0.9927</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.2.4.2.3\"><span class=\"ltx_text\" id=\"S5.T1.2.4.2.3.1\" style=\"font-size:90%;\">0.9709 (-2.19%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.2.4.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.4.2.4.1\" style=\"font-size:90%;\">0.9962</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.2.4.2.5\"><span class=\"ltx_text\" id=\"S5.T1.2.4.2.5.1\" style=\"font-size:90%;\">0.9736 (-2.27%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.2.4.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.4.2.6.1\" style=\"font-size:90%;\">0.9967</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.2.4.2.7\"><span class=\"ltx_text\" id=\"S5.T1.2.4.2.7.1\" style=\"font-size:90%;\">0.9694 (-2.74%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.2.4.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.4.2.8.1\" style=\"font-size:90%;\">0.9977</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S5.T1.2.4.2.9\"><span class=\"ltx_text\" id=\"S5.T1.2.4.2.9.1\" style=\"font-size:90%;\">0.9417 (-5.61%)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.2.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T1.2.5.3.1\"><span class=\"ltx_text\" id=\"S5.T1.2.5.3.1.1\" style=\"font-size:90%;\">Density</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.2.5.3.2\"><span class=\"ltx_text\" id=\"S5.T1.2.5.3.2.1\" style=\"font-size:90%;\">0.8946</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.2.5.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.5.3.3.1\" style=\"font-size:90%;\">0.9908 (+10.75%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.2.5.3.4\"><span class=\"ltx_text\" id=\"S5.T1.2.5.3.4.1\" style=\"font-size:90%;\">1.0015</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.2.5.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.5.3.5.1\" 
style=\"font-size:90%;\">1.0049 (+0.34%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.2.5.3.6\"><span class=\"ltx_text\" id=\"S5.T1.2.5.3.6.1\" style=\"font-size:90%;\">0.9925</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.2.5.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.5.3.7.1\" style=\"font-size:90%;\">1.0081 (+1.57%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.2.5.3.8\"><span class=\"ltx_text\" id=\"S5.T1.2.5.3.8.1\" style=\"font-size:90%;\">0.8785</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S5.T1.2.5.3.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.5.3.9.1\" style=\"font-size:90%;\">0.8962 (+2.01%)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.2.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T1.2.6.4.1\"><span class=\"ltx_text\" id=\"S5.T1.2.6.4.1.1\" style=\"font-size:90%;\">Coverage</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.2.6.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.6.4.2.1\" style=\"font-size:90%;\">0.8932</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.2.6.4.3\"><span class=\"ltx_text\" id=\"S5.T1.2.6.4.3.1\" style=\"font-size:90%;\">0.8458 (-5.31%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.2.6.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.6.4.4.1\" style=\"font-size:90%;\">0.9605</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.2.6.4.5\"><span class=\"ltx_text\" id=\"S5.T1.2.6.4.5.1\" style=\"font-size:90%;\">0.8254 (-14.07%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.2.6.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.6.4.6.1\" style=\"font-size:90%;\">0.9498</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.2.6.4.7\"><span class=\"ltx_text\" id=\"S5.T1.2.6.4.7.1\" style=\"font-size:90%;\">0.8572 (-9.75%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.2.6.4.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.6.4.8.1\" style=\"font-size:90%;\">0.9102</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S5.T1.2.6.4.9\"><span class=\"ltx_text\" id=\"S5.T1.2.6.4.9.1\" style=\"font-size:90%;\">0.6840 (-24.85%)</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T1.3.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S5.T1.4.2\" style=\"font-size:90%;\">Comparison of Evaluation Metrics for the two methods: DDPM and Iso-Diffusion for the 2D Datasets</span></figcaption>\n</figure>",
94
+ "capture": "Table 1: Comparison of Evaluation Metrics for the two methods: DDPM and Iso-Diffusion for the 2D Datasets"
95
+ },
96
+ "2": {
97
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.6.7.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T2.6.7.1.1\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_middle\" id=\"S5.T2.6.7.1.1.1\" style=\"width:45.0pt;\">\n<span class=\"ltx_p\" id=\"S5.T2.6.7.1.1.1.1\"><span class=\"ltx_text\" id=\"S5.T2.6.7.1.1.1.1.1\" style=\"font-size:90%;\">Metrics</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T2.6.7.1.2\"><span class=\"ltx_text\" id=\"S5.T2.6.7.1.2.1\" style=\"font-size:90%;\">Oxford Flower</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T2.6.7.1.3\"><span class=\"ltx_text\" id=\"S5.T2.6.7.1.3.1\" style=\"font-size:90%;\">Oxford-IIIT-Pet</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T2.6.7.1.4\"><span class=\"ltx_text\" id=\"S5.T2.6.7.1.4.1\" style=\"font-size:90%;\">CIFAR-10</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T2.6.7.1.5\"><span class=\"ltx_text\" id=\"S5.T2.6.7.1.5.1\" style=\"font-size:90%;\">CIFAR-100</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.8.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.T2.6.8.2.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.6.8.2.2\"><span class=\"ltx_text\" id=\"S5.T2.6.8.2.2.1\" style=\"font-size:90%;\">DDPM</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.6.8.2.3\"><span class=\"ltx_text\" id=\"S5.T2.6.8.2.3.1\" style=\"font-size:90%;\">Ours</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.6.8.2.4\"><span class=\"ltx_text\" id=\"S5.T2.6.8.2.4.1\" style=\"font-size:90%;\">DDPM</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.6.8.2.5\"><span class=\"ltx_text\" id=\"S5.T2.6.8.2.5.1\" style=\"font-size:90%;\">Ours</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.6.8.2.6\"><span class=\"ltx_text\" id=\"S5.T2.6.8.2.6.1\" style=\"font-size:90%;\">DDPM</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.6.8.2.7\"><span class=\"ltx_text\" id=\"S5.T2.6.8.2.7.1\" style=\"font-size:90%;\">Ours</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.6.8.2.8\"><span class=\"ltx_text\" id=\"S5.T2.6.8.2.8.1\" style=\"font-size:90%;\">DDPM</span></th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.6.8.2.9\"><span class=\"ltx_text\" id=\"S5.T2.6.8.2.9.1\" style=\"font-size:90%;\">Ours</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.1.1.1\">\n<span class=\"ltx_text\" id=\"S5.T2.1.1.1.1\" style=\"font-size:90%;\">FID (</span><span class=\"ltx_text\" id=\"S5.T2.1.1.1.2\" style=\"font-size:90%;\">)</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.2\"><span class=\"ltx_text\" id=\"S5.T2.1.1.2.1\" style=\"font-size:90%;\">55.590</span></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.3.1\" style=\"font-size:90%;\">47.310 (-14.9%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.4\"><span class=\"ltx_text\" id=\"S5.T2.1.1.4.1\" style=\"font-size:90%;\">34.087</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.5.1\" style=\"font-size:90%;\">31.900 (-6.4%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.6\"><span class=\"ltx_text\" id=\"S5.T2.1.1.6.1\" style=\"font-size:90%;\">16.023</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.7.1\" style=\"font-size:90%;\">11.872 (-25.9%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.8\"><span class=\"ltx_text\" id=\"S5.T2.1.1.8.1\" style=\"font-size:90%;\">14.794</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.9.1\" style=\"font-size:90%;\">14.141 (-4.4%)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.2.2.1\">\n<span class=\"ltx_text\" id=\"S5.T2.2.2.1.1\" style=\"font-size:90%;\">IS (</span><span class=\"ltx_text\" id=\"S5.T2.2.2.1.2\" style=\"font-size:90%;\">)</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.2.2.2\"><span class=\"ltx_text\" id=\"S5.T2.2.2.2.1\" style=\"font-size:90%;\">3.097</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.2.2.3.1\" style=\"font-size:90%;\">3.504 (+13.1%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.2.2.4\"><span class=\"ltx_text\" id=\"S5.T2.2.2.4.1\" style=\"font-size:90%;\">7.083</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.2.2.5.1\" style=\"font-size:90%;\">7.531 (+6.3%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.2.2.6\"><span class=\"ltx_text\" id=\"S5.T2.2.2.6.1\" style=\"font-size:90%;\">8.463</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.2.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.2.2.7.1\" style=\"font-size:90%;\">8.482 (+0.2%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.2.2.8\"><span class=\"ltx_text\" id=\"S5.T2.2.2.8.1\" style=\"font-size:90%;\">9.032</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S5.T2.2.2.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.2.2.9.1\" style=\"font-size:90%;\">9.183 (+1.7%)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.3.3.1\">\n<span class=\"ltx_text\" id=\"S5.T2.3.3.1.1\" style=\"font-size:90%;\">Precision (</span><span class=\"ltx_text\" id=\"S5.T2.3.3.1.2\" style=\"font-size:90%;\">)</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.2\"><span class=\"ltx_text\" id=\"S5.T2.3.3.2.1\" style=\"font-size:90%;\">0.725</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.3.3.1\" style=\"font-size:90%;\">0.944 (+30.3%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.4\"><span class=\"ltx_text\" id=\"S5.T2.3.3.4.1\" style=\"font-size:90%;\">0.819</span></td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.3.5.1\" style=\"font-size:90%;\">0.954 (+16.5%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.6\"><span class=\"ltx_text\" id=\"S5.T2.3.3.6.1\" style=\"font-size:90%;\">0.607</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.3.7.1\" style=\"font-size:90%;\">0.689 (+13.6%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.3.8\"><span class=\"ltx_text\" id=\"S5.T2.3.3.8.1\" style=\"font-size:90%;\">0.638</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.3.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.3.9.1\" style=\"font-size:90%;\">0.710 (+11.4%)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.4.4.1\">\n<span class=\"ltx_text\" id=\"S5.T2.4.4.1.1\" style=\"font-size:90%;\">Recall (</span><span class=\"ltx_text\" id=\"S5.T2.4.4.1.2\" style=\"font-size:90%;\">)</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.4.4.2.1\" style=\"font-size:90%;\">0.184</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.3\"><span class=\"ltx_text\" id=\"S5.T2.4.4.3.1\" style=\"font-size:90%;\">0.056 (-69.8%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.4.4.4.1\" style=\"font-size:90%;\">0.152</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.5\"><span class=\"ltx_text\" id=\"S5.T2.4.4.5.1\" style=\"font-size:90%;\">0.063 (-58.4%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.4.4.6.1\" style=\"font-size:90%;\">0.447</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.7\"><span class=\"ltx_text\" id=\"S5.T2.4.4.7.1\" style=\"font-size:90%;\">0.384 (-14.0%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.4.4.8.1\" style=\"font-size:90%;\">0.398</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S5.T2.4.4.9\"><span class=\"ltx_text\" id=\"S5.T2.4.4.9.1\" style=\"font-size:90%;\">0.350 (-12.1%)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.5.5.1\">\n<span class=\"ltx_text\" id=\"S5.T2.5.5.1.1\" style=\"font-size:90%;\">Density (</span><span class=\"ltx_text\" id=\"S5.T2.5.5.1.2\" style=\"font-size:90%;\">)</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.2\"><span class=\"ltx_text\" id=\"S5.T2.5.5.2.1\" style=\"font-size:90%;\">2.632</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.3.1\" style=\"font-size:90%;\">11.039 (+319.4%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.4\"><span class=\"ltx_text\" id=\"S5.T2.5.5.4.1\" style=\"font-size:90%;\">6.704</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.5.1\" style=\"font-size:90%;\">15.778 (+135.4%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.6\"><span class=\"ltx_text\" id=\"S5.T2.5.5.6.1\" style=\"font-size:90%;\">1.401</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.7\"><span class=\"ltx_text 
ltx_font_bold\" id=\"S5.T2.5.5.7.1\" style=\"font-size:90%;\">2.104 (+50.3%)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.8\"><span class=\"ltx_text\" id=\"S5.T2.5.5.8.1\" style=\"font-size:90%;\">1.479</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S5.T2.5.5.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.5.5.9.1\" style=\"font-size:90%;\">2.190 (+48.1%)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T2.6.6.1\">\n<span class=\"ltx_text\" id=\"S5.T2.6.6.1.1\" style=\"font-size:90%;\">Coverage (</span><span class=\"ltx_text\" id=\"S5.T2.6.6.1.2\" style=\"font-size:90%;\">)</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.6.6.2\"><span class=\"ltx_text\" id=\"S5.T2.6.6.2.1\" style=\"font-size:90%;\">0.959</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.6.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.6.6.3.1\" style=\"font-size:90%;\">0.994 (+3.6%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.6.6.4\"><span class=\"ltx_text\" id=\"S5.T2.6.6.4.1\" style=\"font-size:90%;\">0.9996</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.6.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.6.6.5.1\" style=\"font-size:90%;\">0.9999 (+0.03%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.6.6.6\"><span class=\"ltx_text\" id=\"S5.T2.6.6.6.1\" style=\"font-size:90%;\">0.987</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.6.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.6.6.7.1\" style=\"font-size:90%;\">0.995 (+0.8%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.6.6.8\"><span class=\"ltx_text\" id=\"S5.T2.6.6.8.1\" style=\"font-size:90%;\">0.9996</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S5.T2.6.6.9\"><span class=\"ltx_text\" id=\"S5.T2.6.6.9.1\" style=\"font-size:90%;\">0.9996 (0.0%)</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T2.8.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S5.T2.9.2\" style=\"font-size:90%;\">Comparison of Evaluation Metrics for the two methods: DDPM and Iso-Diffusion for the Image Datasets.</span></figcaption>\n</figure>",
98
+ "capture": "Table 2: Comparison of Evaluation Metrics for the two methods: DDPM and Iso-Diffusion for the Image Datasets."
99
+ },
100
+ "3": {
101
+ "table_html": "<figure class=\"ltx_table\" id=\"S7.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S7.T3.10\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S7.T3.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S7.T3.6.6.7\"><span class=\"ltx_text\" id=\"S7.T3.6.6.7.1\" style=\"font-size:90%;\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T3.1.1.1\">\n<span class=\"ltx_text\" id=\"S7.T3.1.1.1.1\" style=\"font-size:90%;\">FID (</span><span class=\"ltx_text\" id=\"S7.T3.1.1.1.2\" style=\"font-size:90%;\">)</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T3.2.2.2\">\n<span class=\"ltx_text\" id=\"S7.T3.2.2.2.1\" style=\"font-size:90%;\">IS (</span><span class=\"ltx_text\" id=\"S7.T3.2.2.2.2\" style=\"font-size:90%;\">)</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T3.3.3.3\">\n<span class=\"ltx_text\" id=\"S7.T3.3.3.3.1\" style=\"font-size:90%;\">Precision (</span><span class=\"ltx_text\" id=\"S7.T3.3.3.3.2\" style=\"font-size:90%;\">)</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T3.4.4.4\">\n<span class=\"ltx_text\" id=\"S7.T3.4.4.4.1\" style=\"font-size:90%;\">Recall (</span><span class=\"ltx_text\" id=\"S7.T3.4.4.4.2\" style=\"font-size:90%;\">)</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T3.5.5.5\">\n<span class=\"ltx_text\" id=\"S7.T3.5.5.5.1\" style=\"font-size:90%;\">Density (</span><span class=\"ltx_text\" id=\"S7.T3.5.5.5.2\" style=\"font-size:90%;\">)</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T3.6.6.6\">\n<span class=\"ltx_text\" id=\"S7.T3.6.6.6.1\" style=\"font-size:90%;\">Coverage (</span><span class=\"ltx_text\" id=\"S7.T3.6.6.6.2\" style=\"font-size:90%;\">)</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S7.T3.10.11.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S7.T3.10.11.1.1\"><span class=\"ltx_text\" id=\"S7.T3.10.11.1.1.1\" style=\"font-size:90%;\">DDPM</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T3.10.11.1.2\"><span class=\"ltx_text\" id=\"S7.T3.10.11.1.2.1\" style=\"font-size:90%;\">55.5900</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T3.10.11.1.3\"><span class=\"ltx_text\" id=\"S7.T3.10.11.1.3.1\" style=\"font-size:90%;\">3.0970</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T3.10.11.1.4\"><span class=\"ltx_text\" id=\"S7.T3.10.11.1.4.1\" style=\"font-size:90%;\">0.7248</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T3.10.11.1.5\"><span class=\"ltx_text\" id=\"S7.T3.10.11.1.5.1\" style=\"font-size:90%;\">0.1840</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T3.10.11.1.6\"><span class=\"ltx_text\" id=\"S7.T3.10.11.1.6.1\" style=\"font-size:90%;\">2.6320</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T3.10.11.1.7\"><span class=\"ltx_text\" id=\"S7.T3.10.11.1.7.1\" style=\"font-size:90%;\">0.9588</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T3.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T3.7.7.1\">\n<span class=\"ltx_text\" id=\"S7.T3.7.7.1.1\" style=\"font-size:90%;\">Ours </span><span class=\"ltx_text\" 
id=\"S7.T3.7.7.1.2\" style=\"font-size:90%;\"> = 0.01</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.7.7.2\"><span class=\"ltx_text\" id=\"S7.T3.7.7.2.1\" style=\"font-size:90%;\">53.3374</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.7.7.3\"><span class=\"ltx_text\" id=\"S7.T3.7.7.3.1\" style=\"font-size:90%;\">3.2023</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.7.7.4\"><span class=\"ltx_text\" id=\"S7.T3.7.7.4.1\" style=\"font-size:90%;\">0.7839</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.7.7.5\"><span class=\"ltx_text\" id=\"S7.T3.7.7.5.1\" style=\"font-size:90%;\">0.1570</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.7.7.6\"><span class=\"ltx_text\" id=\"S7.T3.7.7.6.1\" style=\"font-size:90%;\">3.3445</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.7.7.7\"><span class=\"ltx_text\" id=\"S7.T3.7.7.7.1\" style=\"font-size:90%;\">0.9758</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T3.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T3.8.8.1\">\n<span class=\"ltx_text\" id=\"S7.T3.8.8.1.1\" style=\"font-size:90%;\">Ours </span><span class=\"ltx_text\" id=\"S7.T3.8.8.1.2\" style=\"font-size:90%;\"> = 0.05</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.2\"><span class=\"ltx_text\" id=\"S7.T3.8.8.2.1\" style=\"font-size:90%;\">54.7064</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.3\"><span class=\"ltx_text\" id=\"S7.T3.8.8.3.1\" style=\"font-size:90%;\">3.2208</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.4\"><span class=\"ltx_text\" id=\"S7.T3.8.8.4.1\" style=\"font-size:90%;\">0.733</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.5\"><span class=\"ltx_text\" id=\"S7.T3.8.8.5.1\" style=\"font-size:90%;\">0.17625</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.6\"><span class=\"ltx_text\" id=\"S7.T3.8.8.6.1\" style=\"font-size:90%;\">2.6384</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.8.8.7\"><span class=\"ltx_text\" id=\"S7.T3.8.8.7.1\" style=\"font-size:90%;\">0.9586</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T3.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T3.9.9.1\">\n<span class=\"ltx_text\" id=\"S7.T3.9.9.1.1\" style=\"font-size:90%;\">Ours </span><span class=\"ltx_text\" id=\"S7.T3.9.9.1.2\" style=\"font-size:90%;\"> = 0.10</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.9.9.2\"><span class=\"ltx_text\" id=\"S7.T3.9.9.2.1\" style=\"font-size:90%;\">47.3097</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.9.9.3\"><span class=\"ltx_text\" id=\"S7.T3.9.9.3.1\" style=\"font-size:90%;\">3.5037</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.9.9.4\"><span class=\"ltx_text\" id=\"S7.T3.9.9.4.1\" style=\"font-size:90%;\">0.9441</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.9.9.5\"><span class=\"ltx_text\" id=\"S7.T3.9.9.5.1\" style=\"font-size:90%;\">0.0555</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.9.9.6\"><span class=\"ltx_text\" id=\"S7.T3.9.9.6.1\" style=\"font-size:90%;\">11.0389</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T3.9.9.7\"><span class=\"ltx_text\" id=\"S7.T3.9.9.7.1\" style=\"font-size:90%;\">0.9935</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T3.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S7.T3.10.10.1\">\n<span class=\"ltx_text\" id=\"S7.T3.10.10.1.1\" style=\"font-size:90%;\">Ours 
</span><span class=\"ltx_text\" id=\"S7.T3.10.10.1.2\" style=\"font-size:90%;\"> = 0.30</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T3.10.10.2\"><span class=\"ltx_text\" id=\"S7.T3.10.10.2.1\" style=\"font-size:90%;\">51.5820</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T3.10.10.3\"><span class=\"ltx_text\" id=\"S7.T3.10.10.3.1\" style=\"font-size:90%;\">3.3105</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T3.10.10.4\"><span class=\"ltx_text\" id=\"S7.T3.10.10.4.1\" style=\"font-size:90%;\">0.9460</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T3.10.10.5\"><span class=\"ltx_text\" id=\"S7.T3.10.10.5.1\" style=\"font-size:90%;\">0.0549</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T3.10.10.6\"><span class=\"ltx_text\" id=\"S7.T3.10.10.6.1\" style=\"font-size:90%;\">12.5441</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T3.10.10.7\"><span class=\"ltx_text\" id=\"S7.T3.10.10.7.1\" style=\"font-size:90%;\">0.9946</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S7.T3.14.2.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S7.T3.12.1\" style=\"font-size:90%;\">Metrics Variation with the Regularization Parameter for the Oxford Flower Dataset ()</span></figcaption>\n</figure>",
102
+ "capture": "Table 3: Metrics Variation with the Regularization Parameter for the Oxford Flower Dataset ()"
103
+ },
104
+ "4": {
105
+ "table_html": "<figure class=\"ltx_table\" id=\"S7.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S7.T4.10\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S7.T4.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S7.T4.6.6.7\"><span class=\"ltx_text\" id=\"S7.T4.6.6.7.1\" style=\"font-size:90%;\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T4.1.1.1\">\n<span class=\"ltx_text\" id=\"S7.T4.1.1.1.1\" style=\"font-size:90%;\">FID (</span><span class=\"ltx_text\" id=\"S7.T4.1.1.1.2\" style=\"font-size:90%;\">)</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T4.2.2.2\">\n<span class=\"ltx_text\" id=\"S7.T4.2.2.2.1\" style=\"font-size:90%;\">IS (</span><span class=\"ltx_text\" id=\"S7.T4.2.2.2.2\" style=\"font-size:90%;\">)</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T4.3.3.3\">\n<span class=\"ltx_text\" id=\"S7.T4.3.3.3.1\" style=\"font-size:90%;\">Precision (</span><span class=\"ltx_text\" id=\"S7.T4.3.3.3.2\" style=\"font-size:90%;\">)</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T4.4.4.4\">\n<span class=\"ltx_text\" id=\"S7.T4.4.4.4.1\" style=\"font-size:90%;\">Recall (</span><span class=\"ltx_text\" id=\"S7.T4.4.4.4.2\" style=\"font-size:90%;\">)</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T4.5.5.5\">\n<span class=\"ltx_text\" id=\"S7.T4.5.5.5.1\" style=\"font-size:90%;\">Density (</span><span class=\"ltx_text\" id=\"S7.T4.5.5.5.2\" style=\"font-size:90%;\">)</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T4.6.6.6\">\n<span class=\"ltx_text\" id=\"S7.T4.6.6.6.1\" style=\"font-size:90%;\">Coverage (</span><span class=\"ltx_text\" id=\"S7.T4.6.6.6.2\" style=\"font-size:90%;\">)</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S7.T4.10.11.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S7.T4.10.11.1.1\"><span class=\"ltx_text\" id=\"S7.T4.10.11.1.1.1\" style=\"font-size:90%;\">DDPM</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T4.10.11.1.2\"><span class=\"ltx_text\" id=\"S7.T4.10.11.1.2.1\" style=\"font-size:90%;\">34.0874</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T4.10.11.1.3\"><span class=\"ltx_text\" id=\"S7.T4.10.11.1.3.1\" style=\"font-size:90%;\">7.0827</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T4.10.11.1.4\"><span class=\"ltx_text\" id=\"S7.T4.10.11.1.4.1\" style=\"font-size:90%;\">0.8189</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T4.10.11.1.5\"><span class=\"ltx_text\" id=\"S7.T4.10.11.1.5.1\" style=\"font-size:90%;\">0.1522</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T4.10.11.1.6\"><span class=\"ltx_text\" id=\"S7.T4.10.11.1.6.1\" style=\"font-size:90%;\">6.7040</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T4.10.11.1.7\"><span class=\"ltx_text\" id=\"S7.T4.10.11.1.7.1\" style=\"font-size:90%;\">0.9996</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.7.7.1\">\n<span class=\"ltx_text\" id=\"S7.T4.7.7.1.1\" style=\"font-size:90%;\">Ours </span><span class=\"ltx_text\" 
id=\"S7.T4.7.7.1.2\" style=\"font-size:90%;\"> = 0.01</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.7.7.2\"><span class=\"ltx_text\" id=\"S7.T4.7.7.2.1\" style=\"font-size:90%;\">32.7278</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.7.7.3\"><span class=\"ltx_text\" id=\"S7.T4.7.7.3.1\" style=\"font-size:90%;\">7.5298</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.7.7.4\"><span class=\"ltx_text\" id=\"S7.T4.7.7.4.1\" style=\"font-size:90%;\">0.8805</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.7.7.5\"><span class=\"ltx_text\" id=\"S7.T4.7.7.5.1\" style=\"font-size:90%;\">0.1233</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.7.7.6\"><span class=\"ltx_text\" id=\"S7.T4.7.7.6.1\" style=\"font-size:90%;\">7.9743</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.7.7.7\"><span class=\"ltx_text\" id=\"S7.T4.7.7.7.1\" style=\"font-size:90%;\">0.9991</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.8.8.1\">\n<span class=\"ltx_text\" id=\"S7.T4.8.8.1.1\" style=\"font-size:90%;\">Ours </span><span class=\"ltx_text\" id=\"S7.T4.8.8.1.2\" style=\"font-size:90%;\"> = 0.05</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.8.8.2\"><span class=\"ltx_text\" id=\"S7.T4.8.8.2.1\" style=\"font-size:90%;\">32.4877</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.8.8.3\"><span class=\"ltx_text\" id=\"S7.T4.8.8.3.1\" style=\"font-size:90%;\">7.5156</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.8.8.4\"><span class=\"ltx_text\" id=\"S7.T4.8.8.4.1\" style=\"font-size:90%;\">0.8628</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.8.8.5\"><span class=\"ltx_text\" id=\"S7.T4.8.8.5.1\" style=\"font-size:90%;\">0.1263</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.8.8.6\"><span class=\"ltx_text\" id=\"S7.T4.8.8.6.1\" style=\"font-size:90%;\">8.0348</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.8.8.7\"><span class=\"ltx_text\" id=\"S7.T4.8.8.7.1\" style=\"font-size:90%;\">0.9997</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S7.T4.9.9.1\">\n<span class=\"ltx_text\" id=\"S7.T4.9.9.1.1\" style=\"font-size:90%;\">Ours </span><span class=\"ltx_text\" id=\"S7.T4.9.9.1.2\" style=\"font-size:90%;\"> = 0.10</span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.9.9.2\"><span class=\"ltx_text\" id=\"S7.T4.9.9.2.1\" style=\"font-size:90%;\">33.3405</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.9.9.3\"><span class=\"ltx_text\" id=\"S7.T4.9.9.3.1\" style=\"font-size:90%;\">7.4813</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.9.9.4\"><span class=\"ltx_text\" id=\"S7.T4.9.9.4.1\" style=\"font-size:90%;\">0.9103</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.9.9.5\"><span class=\"ltx_text\" id=\"S7.T4.9.9.5.1\" style=\"font-size:90%;\">0.1015</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.9.9.6\"><span class=\"ltx_text\" id=\"S7.T4.9.9.6.1\" style=\"font-size:90%;\">10.6351</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T4.9.9.7\"><span class=\"ltx_text\" id=\"S7.T4.9.9.7.1\" style=\"font-size:90%;\">1.0000</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T4.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S7.T4.10.10.1\">\n<span class=\"ltx_text\" id=\"S7.T4.10.10.1.1\" style=\"font-size:90%;\">Ours 
</span><span class=\"ltx_text\" id=\"S7.T4.10.10.1.2\" style=\"font-size:90%;\"> = 0.30</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T4.10.10.2\"><span class=\"ltx_text\" id=\"S7.T4.10.10.2.1\" style=\"font-size:90%;\">31.8998</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T4.10.10.3\"><span class=\"ltx_text\" id=\"S7.T4.10.10.3.1\" style=\"font-size:90%;\">7.5313</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T4.10.10.4\"><span class=\"ltx_text\" id=\"S7.T4.10.10.4.1\" style=\"font-size:90%;\">0.9542</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T4.10.10.5\"><span class=\"ltx_text\" id=\"S7.T4.10.10.5.1\" style=\"font-size:90%;\">0.0633</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T4.10.10.6\"><span class=\"ltx_text\" id=\"S7.T4.10.10.6.1\" style=\"font-size:90%;\">15.7780</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T4.10.10.7\"><span class=\"ltx_text\" id=\"S7.T4.10.10.7.1\" style=\"font-size:90%;\">0.9999</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S7.T4.14.2.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S7.T4.12.1\" style=\"font-size:90%;\">Metrics Variation with the Regularization Parameter for the Oxford-IIIT-Pet Dataset ()</span></figcaption>\n</figure>",
106
+ "capture": "Table 4: Metrics Variation with the Regularization Parameter for the Oxford-IIIT-Pet Dataset ()"
107
+ }
108
+ },
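The bracketed percentages in the "Ours" columns of Tables 2-4 appear to be relative changes with respect to the DDPM baseline. A minimal sketch of that computation, using the FID row of Table 2 (the function and dictionary names below are illustrative, not from the paper):

# Sketch: relative-change annotations as they appear to be computed in Table 2.
# Metric values are copied from the FID row of Table 2; the helper name is ours, not the paper's.
def relative_change(baseline: float, ours: float) -> float:
    """Signed percentage change of `ours` with respect to `baseline`."""
    return 100.0 * (ours - baseline) / baseline

fid_ddpm = {"OxfordFlower": 55.590, "OxfordPet": 34.087, "CIFAR10": 16.023, "CIFAR100": 14.794}
fid_ours = {"OxfordFlower": 47.310, "OxfordPet": 31.900, "CIFAR10": 11.872, "CIFAR100": 14.141}

for name in fid_ddpm:
    print(f"{name}: {relative_change(fid_ddpm[name], fid_ours[name]):+.1f}%")
# Prints -14.9%, -6.4%, -25.9%, -4.4%, matching the annotations in Table 2.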
109
+ "image_paths": {
110
+ "1": {
111
+ "figure_path": "2403.16790v2_figure_1.png",
112
+ "caption": "Figure 1: Comparison of the generated images via the DDPM (left) and Iso-Diffusion (right). The DDPM generated images contain much more artefacts and do not seem realistic. However, the generated images via Iso-Diffusion are much more realistic and thus, they are of high fidelity.",
113
+ "url": "http://arxiv.org/html/2403.16790v2/extracted/6025847/images/ImageDatasets/ImageGeneration_01.png"
114
+ },
115
+ "2": {
116
+ "figure_path": "2403.16790v2_figure_2.png",
117
+ "caption": "Figure 2: Variation of isotropy of the data distribution along the forward diffusion process for the 2D synthetic datasets. As can be seen from the plot, in the limit, the data distribution reaches the value of two, which happens to be the dimension of an isotropic random vector in \u211d2superscript\u211d2\\mathbb{R}^{2}blackboard_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT (expected squared norm of an isotropic random vector).",
118
+ "url": "http://arxiv.org/html/2403.16790v2/extracted/6025847/images/IsotropyBoundary/IsotropyvsTimeStepsV6.png"
119
+ },
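The Figure 2 caption quotes the fact that the expected squared norm of an isotropic random vector equals its dimension (two in R^2). A small self-contained numerical check of that statement, not taken from the paper's code:

# Sketch: the expected squared norm of an isotropic (standard Gaussian) vector equals its dimension.
# This only illustrates the fact quoted in the Figure 2 caption; it is not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(size=(100_000, 2))   # isotropic samples in R^2
print(np.mean(np.sum(z**2, axis=1)))         # ~= 2.0, the dimension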
120
+ "3": {
121
+ "figure_path": "2403.16790v2_figure_3.png",
122
+ "caption": "Figure 3: An example scenario for illustrating a situation where high Density and low Coverage is recorded. Generating samples in the neighborhoods of the highly dense regions over the outliers in the true manifold has resulted in a high Density and low Coverage.",
123
+ "url": "http://arxiv.org/html/2403.16790v2/extracted/6025847/images/PRDC/PRDCInterpretation05.png"
124
+ },
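Figure 3 discusses how a generator can score high Density but low Coverage. A simplified sketch of those two metrics, following the definitions in Naeem et al., "Reliable fidelity and diversity metrics for generative models" (listed in the references below); it is not the paper's evaluation code, and the function names are ours:

# Simplified Density / Coverage sketch (k-NN balls around real samples), after Naeem et al.
import numpy as np

def knn_radii(real: np.ndarray, k: int = 5) -> np.ndarray:
    """Distance from each real point to its k-th nearest real neighbour."""
    d = np.linalg.norm(real[:, None, :] - real[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, k]                      # column 0 is the point itself

def density_coverage(real: np.ndarray, fake: np.ndarray, k: int = 5):
    r = knn_radii(real, k)              # per-real-point neighbourhood radius
    d = np.linalg.norm(fake[:, None, :] - real[None, :, :], axis=-1)  # fake-to-real distances
    inside = d < r[None, :]             # fake j lies inside the ball around real i
    density = inside.sum() / (k * len(fake))       # can exceed 1 near dense regions
    coverage = inside.any(axis=0).mean()           # fraction of real balls hit by some fake
    return density, coverage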
125
+ "4": {
126
+ "figure_path": "2403.16790v2_figure_4.png",
127
+ "caption": "Figure 4: Center Banana 2D synthetic dataset. (a) True distribution points, color-coded by k=5 nearest neighbor radius. (b) DDPM-generated points, color-coded by true manifold span per point. (c) Iso-Diffusion generated points, color-coded by true manifold span per point.",
128
+ "url": "http://arxiv.org/html/2403.16790v2/extracted/6025847/images/Results/DensityPlotCenterBananaV3.png"
129
+ },
130
+ "5": {
131
+ "figure_path": "2403.16790v2_figure_5.png",
132
+ "caption": "Figure 5: Scattered Moon 2D synthetic dataset. (a) True distribution points, color-coded by k=5 nearest neighbor radius. (b) DDPM-generated points, color-coded by true manifold span per point. (c) Iso-Diffusion generated points, color-coded by true manifold span per point.",
133
+ "url": "http://arxiv.org/html/2403.16790v2/extracted/6025847/images/Results/DensityPlotScatteredMoonV3.png"
134
+ },
135
+ "6": {
136
+ "figure_path": "2403.16790v2_figure_6.png",
137
+ "caption": "Figure 6: Swiss Roll 2D synthetic dataset. (a) True distribution points, color-coded by k=5 nearest neighbor radius. (b) DDPM-generated points, color-coded by true manifold span per point. (c) Iso-Diffusion generated points, color-coded by true manifold span per point.",
138
+ "url": "http://arxiv.org/html/2403.16790v2/extracted/6025847/images/Results/DensityPlotSwissRollV3.png"
139
+ }
140
+ },
141
+ "validation": true,
142
+ "references": [
143
+ {
144
+ "1": {
145
+ "title": "Diffusion models beat gans on image synthesis, 2021.",
146
+ "author": "Prafulla Dhariwal and Alex Nichol.",
147
+ "venue": null,
148
+ "url": null
149
+ }
150
+ },
151
+ {
152
+ "2": {
153
+ "title": "Don\u2019t drop your samples! coherence-aware training benefits conditional diffusion.",
154
+ "author": "Nicolas Dufour, Victor Besnier, Vicky Kalogeiton, and David Picard.",
155
+ "venue": "In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6264\u20136273, 2024.",
156
+ "url": null
157
+ }
158
+ },
159
+ {
160
+ "3": {
161
+ "title": "Generative adversarial networks.",
162
+ "author": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio.",
163
+ "venue": "Communications of the ACM, 63(11):139\u2013144, 2020.",
164
+ "url": null
165
+ }
166
+ },
167
+ {
168
+ "4": {
169
+ "title": "Density-aware feature embedding for face clustering.",
170
+ "author": "Senhui Guo, Jing Xu, Dapeng Chen, Chao Zhang, Xiaogang Wang, and Rui Zhao.",
171
+ "venue": "In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6697\u20136705, 2020.",
172
+ "url": null
173
+ }
174
+ },
175
+ {
176
+ "5": {
177
+ "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium.",
178
+ "author": "Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter.",
179
+ "venue": "Advances in neural information processing systems, 30, 2017.",
180
+ "url": null
181
+ }
182
+ },
183
+ {
184
+ "6": {
185
+ "title": "Classifier-free diffusion guidance.",
186
+ "author": "Jonathan Ho and Tim Salimans.",
187
+ "venue": "arXiv preprint arXiv:2207.12598, 2022.",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "7": {
193
+ "title": "Denoising diffusion probabilistic models.",
194
+ "author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel.",
195
+ "venue": "Advances in neural information processing systems, 33:6840\u20136851, 2020.",
196
+ "url": null
197
+ }
198
+ },
199
+ {
200
+ "8": {
201
+ "title": "Imagen video: High definition video generation with diffusion models.",
202
+ "author": "Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al.",
203
+ "venue": "arXiv preprint arXiv:2210.02303, 2022a.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "9": {
209
+ "title": "Cascaded diffusion models for high fidelity image generation.",
210
+ "author": "Jonathan Ho, Chitwan Saharia, William Chan, David J Fleet, Mohammad Norouzi, and Tim Salimans.",
211
+ "venue": "The Journal of Machine Learning Research, 23(1):2249\u20132281, 2022b.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "10": {
217
+ "title": "Training-free content injection using h-space in diffusion models, 2024.",
218
+ "author": "Jaeseok Jeong, Mingi Kwon, and Youngjung Uh.",
219
+ "venue": null,
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "11": {
225
+ "title": "Auto-encoding variational bayes.",
226
+ "author": "Diederik P Kingma and Max Welling.",
227
+ "venue": "arXiv preprint arXiv:1312.6114, 2013.",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "12": {
233
+ "title": "Cifar-10 (canadian institute for advanced research).",
234
+ "author": "Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.",
235
+ "venue": null,
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "13": {
241
+ "title": "Improved precision and recall metric for assessing generative models.",
242
+ "author": "Tuomas Kynk\u00e4\u00e4nniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila.",
243
+ "venue": "Advances in Neural Information Processing Systems, 32, 2019.",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "14": {
249
+ "title": "Fast diffusion em: a diffusion model for blind inverse problems with application to deconvolution, 2023.",
250
+ "author": "Charles Laroche, Andr\u00e9s Almansa, and Eva Coupete.",
251
+ "venue": null,
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "15": {
257
+ "title": "Fast training of diffusion transformer with extreme masking for 3d point clouds generation.",
258
+ "author": "Shentong Mo, Enze Xie, Yue Wu, Junsong Chen, Matthias Nie\u00dfner, and Zhenguo Li.",
259
+ "venue": "2023.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "16": {
265
+ "title": "Reliable fidelity and diversity metrics for generative models.",
266
+ "author": "Muhammad Ferjad Naeem, Seong Joon Oh, Youngjung Uh, Yunjey Choi, and Jaejun Yoo.",
267
+ "venue": "2020.",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "17": {
273
+ "title": "Improved denoising diffusion probabilistic models, 2021a.",
274
+ "author": "Alex Nichol and Prafulla Dhariwal.",
275
+ "venue": null,
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "18": {
281
+ "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models.",
282
+ "author": "Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen.",
283
+ "venue": "arXiv preprint arXiv:2112.10741, 2021.",
284
+ "url": null
285
+ }
286
+ },
287
+ {
288
+ "19": {
289
+ "title": "Improved denoising diffusion probabilistic models.",
290
+ "author": "Alexander Quinn Nichol and Prafulla Dhariwal.",
291
+ "venue": "In International Conference on Machine Learning, pages 8162\u20138171. PMLR, 2021b.",
292
+ "url": null
293
+ }
294
+ },
295
+ {
296
+ "20": {
297
+ "title": "Automated flower classification over a large number of classes.",
298
+ "author": "Maria-Elena Nilsback and Andrew Zisserman.",
299
+ "venue": "In 2008 Sixth Indian conference on computer vision, graphics & image processing, pages 722\u2013729. IEEE, 2008.",
300
+ "url": null
301
+ }
302
+ },
303
+ {
304
+ "21": {
305
+ "title": "Cats and dogs.",
306
+ "author": "Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar.",
307
+ "venue": "In IEEE Conference on Computer Vision and Pattern Recognition, 2012.",
308
+ "url": null
309
+ }
310
+ },
311
+ {
312
+ "22": {
313
+ "title": "Hierarchical text-conditional image generation with clip latents.",
314
+ "author": "Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen.",
315
+ "venue": "arXiv preprint arXiv:2204.06125, 1(2):3, 2022.",
316
+ "url": null
317
+ }
318
+ },
319
+ {
320
+ "23": {
321
+ "title": "Variational inference with normalizing flows.",
322
+ "author": "Danilo Rezende and Shakir Mohamed.",
323
+ "venue": "In International conference on machine learning, pages 1530\u20131538. PMLR, 2015.",
324
+ "url": null
325
+ }
326
+ },
327
+ {
328
+ "24": {
329
+ "title": "High-resolution image synthesis with latent diffusion models.",
330
+ "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.",
331
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684\u201310695, 2022.",
332
+ "url": null
333
+ }
334
+ },
335
+ {
336
+ "25": {
337
+ "title": "Concon-chi: Concept-context chimera benchmark for personalized vision-language tasks.",
338
+ "author": "Andrea Rosasco, Stefano Berti, Giulia Pasquale, Damiano Malafronte, Shogo Sato, Hiroyuki Segawa, Tetsugo Inada, and Lorenzo Natale.",
339
+ "venue": "In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 22239\u201322248, 2024.",
340
+ "url": null
341
+ }
342
+ },
343
+ {
344
+ "26": {
345
+ "title": "Photorealistic text-to-image diffusion models with deep language understanding.",
346
+ "author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al.",
347
+ "venue": "Advances in Neural Information Processing Systems, 35:36479\u201336494, 2022.",
348
+ "url": null
349
+ }
350
+ },
351
+ {
352
+ "27": {
353
+ "title": "Assessing generative models via precision and recall, 2018.",
354
+ "author": "Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain Gelly.",
355
+ "venue": null,
356
+ "url": null
357
+ }
358
+ },
359
+ {
360
+ "28": {
361
+ "title": "Improved techniques for training gans.",
362
+ "author": "Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.",
363
+ "venue": "Advances in neural information processing systems, 29, 2016.",
364
+ "url": null
365
+ }
366
+ },
367
+ {
368
+ "29": {
369
+ "title": "Singan: Learning a generative model from a single natural image.",
370
+ "author": "Tamar Rott Shaham, Tali Dekel, and Tomer Michaeli.",
371
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.",
372
+ "url": null
373
+ }
374
+ },
375
+ {
376
+ "30": {
377
+ "title": "Synthprov: Interpretable framework for profiling identity leakage.",
378
+ "author": "Jaisidh Singh, Harshil Bhatia, Mayank Vatsa, Richa Singh, and Aparna Bharati.",
379
+ "venue": "In 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 4734\u20134744, 2024.",
380
+ "url": null
381
+ }
382
+ },
383
+ {
384
+ "31": {
385
+ "title": "Deep unsupervised learning using nonequilibrium thermodynamics.",
386
+ "author": "Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli.",
387
+ "venue": "In International conference on machine learning, pages 2256\u20132265. PMLR, 2015.",
388
+ "url": null
389
+ }
390
+ },
391
+ {
392
+ "32": {
393
+ "title": "Generative modeling by estimating gradients of the data distribution.",
394
+ "author": "Yang Song and Stefano Ermon.",
395
+ "venue": "Advances in neural information processing systems, 32, 2019.",
396
+ "url": null
397
+ }
398
+ },
399
+ {
400
+ "33": {
401
+ "title": "Solving inverse problems in medical imaging with score-based generative models.",
402
+ "author": "Yang Song, Liyue Shen, Lei Xing, and Stefano Ermon.",
403
+ "venue": "arXiv preprint arXiv:2111.08005, 2021.",
404
+ "url": null
405
+ }
406
+ },
407
+ {
408
+ "34": {
409
+ "title": "High-dimensional probability: An introduction with applications in data science.",
410
+ "author": "Roman Vershynin.",
411
+ "venue": "Cambridge university press, 2018.",
412
+ "url": null
413
+ }
414
+ }
415
+ ],
416
+ "url": "http://arxiv.org/html/2403.16790v2"
417
+ }
20241127/2404.00345v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2404.05779v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2404.08402v2.json ADDED
@@ -0,0 +1,84 @@
1
+ {
2
+ "title": "Galois Self-dual 2-quasi Constacyclic Codes over Finite Fields",
3
+ "abstract": "Let be a field with cardinality and , and\n.\nExtending Euclidean and Hermitian inner products,\nFan and Zhang introduced Galois -inner product\n(DCC, vol.84, pp.473-492).\nIn this paper, we characterize the structure\nof -quasi -constacyclic codes over ;\nand exhibit necessary and sufficient conditions\nfor -quasi -constacyclic codes being Galois -self-dual.\nWith the help of a technique developed in this paper,\nwe prove that, when is even,\nthe Hermitian self-dual -quasi -constacyclic codes\nare asymptotically good if and only if .\nAnd, when ,\nthe Euclidean self-dual -quasi -constacyclic codes\nare asymptotically good if and only if .",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Let be a finite field with cardinality \nwhere is a prime and is a positive integer,\nlet .\nAny \nis called a word over of length , where is a positive integer.\nThe Hamming weight is defined as the number\nof the indexes with .\nThe Hamming distance between two words \nis defined as .\nAny nonempty subset is called a code of length over ,\nand the words in the code are called codewords.\nThe minimum distance\n.\nFor any linear subspace of , called a linear code,\nthe minimum weight\n;\nand it is known that .\nThe fraction \nis called the relative minimum distance of ,\nand is called the rate of .\nA code sequence is said to be asymptotically good\nif the length of goes to infinity and there is a real number \nsuch that and \nfor .\nA class of codes is said to be asymptotically good if\nthere is an asymptotically good sequence of codes in the class;\notherwise, we say that the class of codes is asymptotically bad.\nA linear code of is called a cyclic code\nif is invariant under the cyclic permutation on items, i.e.,\nA linear code of is called a quasi-cyclic code of index ,\nabbreviated as -quasi-cyclic code,\nif is invariant under the double cyclic permutation on items, i.e.,\nThe Euclidean inner product of words\n of \nis defined to be .\nFor a code , the\n\nis the dual code of .\nA code is said to be self-dual if .\nObviously, the rate if is self-dual.\nCyclic codes are investigated extensively in theory and practice, cf. [18 ###reference_b18###].\nIt is still an open question (cf. [26 ###reference_b26###]):\nare cyclic codes over asymptotically good?\nHowever, it is well-known long ago that\nthe binary -quasi-cyclic codes are asymptotically good, see [6 ###reference_b6###, 7 ###reference_b7###, 20 ###reference_b20###].\nLater, Mart\u00ednez-P\u00e9rez and Willems [27 ###reference_b27###] proved the\nasymptotic goodness of binary self-dual -quasi-cyclic codes.\nAnd, [1 ###reference_b1###], [22 ###reference_b22###] and [23 ###reference_b23###] proved\nthat, if ,\nthe -ary self-dual -quasi-cyclic codes are asymptotically good.\nNote that \u201c\u201d is a necessary and sufficient\ncondition for the existence of -ary self-dual -quasi-cyclic codes,\ncf. [24 ###reference_b24###, Theorem 6.1].\nThe proof in [1 ###reference_b1###] is based on Artin\u2019s primitive root conjecture.\nThe arguments in [22 ###reference_b22###] and [23 ###reference_b23###] are self-contained.\nAnd the asymptotic goodness of any -ary -quasi-cyclic codes\nwere also proved in [22 ###reference_b22###].\nCyclic codes and -quasi-cyclic codes had been extended widely.\nLet .\nA linear code of is called a -constacyclic code\nif is invariant under the -constacyclic permutation on items, i.e.,\nIf , the -constacyclic codes are called\nnegacyclic codes. Further,\na linear code of is called\na -quasi -constacyclic code if\n is invariant under the double -constacyclic permutation on items, i.e.,\nIf is odd and ,\nthe self-dual -quasi negacyclic codes over are proved asymptotically good\nin [2 ###reference_b2###].\nWhile in [28 ###reference_b28###] for \nit is shown, based on Artin\u2019s primitive root conjecture,\nthat the -ary self-dual -quasi negacyclic codes are asymptotically good.\nRecently, for any and any \nthe -quasi -constacyclic codes over are proved\nasymptotically good, see [12 ###reference_b12###, Corollary I.3].\nAbout the self-dualities, in the semisimple case (i.e., ),\nthe self-dual cyclic codes over does not exist.\nLeon et al. [21 ###reference_b21###] and many references, e.g. 
[8 ###reference_b8###, 25 ###reference_b25###, 29 ###reference_b29###],\ndevoted to the study on various generalizations,\ne.g., duadic codes, extended self-dual cyclic codes, etc.\nOn the other hand,\nDinh and Lopez-Permouth [10 ###reference_b10###], Dinh [9 ###reference_b9###]\nstudied -constacyclic codes, and showed that in the semisimple case\nthe self-dual -constacyclic codes exist only if .\nExtending the Euclidean inner product and the Hermitian inner product,\nFan and Zhang [16 ###reference_b16###] introduced the so-called\nGalois inner products. Recall that . For ,\nthe map , ,\nis a Galois automorphism of , which induces an automorphism\n, .\nThe following\nis called the Galois -inner product on .\nAnd for any code , the following code\nis called the Galois -dual code of .\nThe code is said to be Galois -self-dual\n(or Galois self-dual when is known from context) if .\nIt is also obvious that if is Galois -self-dual.\nWhen ( when is even, respectively),\n is just the Euclidean\n(Hermitian, respectively) inner product, and\n is the Euclidean (Hermitian, respectively) dual code,\nand Galois -self-dual codes are just the Euclidean self-dual\n(Hermitian self-dual, respectively) codes.\nThe existence and the structure\nof Galois -self-dual -constacyclic codes are studied in [16 ###reference_b16###].\nIn this paper we study the Galois -self-dual -quasi -constacyclic codes\nover and their asymptotic properties.\nThe main contributions of this paper are the following.\nWe characterize the algebraic structure of\nthe -quasi -constacyclic codes\nand their Galois -dual codes.\nWe find that the Galois -self-dual -quasi -constacyclic codes\nbehave very differently depending on whether or not.\nIn both the cases we obtain necessary and sufficient conditions\nfor -quasi -constacyclic codes being Galois -self-dual.\nWe obtain that, if ,\nthen the Galois -self-dual\n-quasi -constacyclic codes are asymptotically bad.\nOn the other hand,\nif is even and , then\nthe Hermitian self-dual -quasi -constacyclic codes\nare asymptotically good.\nAnd, if and ,\nthen the Euclidean self-dual -quasi -constacyclic codes\nare asymptotically good.\nFor the Euclidean case, we note that\nthe asymptotic goodness of the self-dual -quasi-cyclic codes\nhas been proved in [23 ###reference_b23###];\non the other hand, for the asymptotic properties of the\nself-dual -quasi negacyclic codes, our result and the results in\n[2 ###reference_b2###, 28 ###reference_b28###], the three results do not cover each other.\nAs for methodology, the so-called reciprocal polynomial\nis a powerful tool for studying the duality property of\n-constacyclic and -quasi -consta-cyclic codes,\ne.g., in [2 ###reference_b2###, 28 ###reference_b28###].\nIt is revised in [23 ###reference_b23###] etc. to the \u201cbar\u201d map of\nthe quotient ring ,\nwhere denotes the polynomial ring over and \ndenotes the ideal generated by ; cf. [17 ###reference_b17###, Remark 6.6(2)].\nFor any matrix over ,\nthe Galois -transpose of is defined to be\n, cf. Eq.(3.3 ###reference_###) below.\nWith the operator \u201c\u201d on matrices,\nwe introduce an operator \u201c\u201d on the quotient ring\n, ,\ncf. 
Lemma 4.8 ###reference_theorem8### below for details.\nThat operator becomes an useful technique\nfor studying the Galois duality property of -constacyclic codes.\nThat is a methodological innovation of the paper.\nIn Section 2 ###reference_###, some preliminaries are sketched.\nIn Section 3 ###reference_###\nwe characterize the algebraic structure of the\n-quasi -consta-cyclic codes over the finite field \nand their Galois -dual codes.\nIn Section 4 ###reference_### we study the\nGalois -self-dual -quasi -constacyclic codes over .\nOur discussion divide into two cases: or . In both the cases,\nwe exhibit the necessary and sufficient conditions for a\n-quasi -constacyclic code being Galois -self-dual.\nAnd we show that if ,\nthen Galois -self-dual -quasi -constacyclic codes\nare asymptotically bad.\nIn Section 5 ###reference_###,\nthe Hermitian self-dual -quasi-cyclic codes over are proved asymptotically good.\nIn Section 6 ###reference_###,\nassuming that is even, we prove that\nthe Hermitian self-dual -quasi -constacyclic codes over \nare asymptotically good if and only if .\nAnd, assuming that ,\nwe show that the Euclidean self-dual -quasi -constacyclic codes over \nare asymptotically good if and only if .\nFinally, we end this paper by a conclusion in Section 7 ###reference_###."
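The displayed definition of the Galois inner product referred to in the Introduction above (Eqs. (1.5)-(1.6)) did not survive extraction. As a reading aid, a reconstruction consistent with the Fan-Zhang definition cited there, assuming q = p^e and 0 <= h < e (an assumption on the notation, not copied from this file), is:

% Reconstruction of the Galois h-inner product and the Galois h-dual code,
% following the Fan-Zhang definition cited in the text above.
\langle \mathbf{a}, \mathbf{b} \rangle_h \;=\; \sum_{i=0}^{n-1} a_i\, b_i^{\,p^h},
\qquad \mathbf{a},\mathbf{b} \in \mathbb{F}_q^{\,n},\ q = p^e,\ 0 \le h < e;
\qquad
C^{\perp_h} \;=\; \bigl\{\, \mathbf{a} \in \mathbb{F}_q^{\,n} \;\bigm|\; \langle \mathbf{a}, \mathbf{c}\rangle_h = 0 \ \text{for all}\ \mathbf{c}\in C \,\bigr\}.

With this convention, h = 0 recovers the Euclidean inner product and, for even e, h = e/2 recovers the Hermitian one, matching the special cases stated in the abstract.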
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Preliminaries",
15
+ "text": "In this paper, is always a finite field of cardinality \n(by we denote the cardinality of any set ),\nwhere is a prime and is a positive integer;\nand is an integer such that ; and is an integer.\nAny ring in this paper has identity (or denoted by for short);\nand ring homomorphisms and subrings are identity preserving.\nBy we denote the multiplication group consisting\nof all units (invertible elements) of .\nIn particular, .\nIf a ring is also an -vector space,\nthen is said to be an -algebra.\nIn that case, , ,\nis an embedding, so that we write that .\nLet and be -algebras.\nA map is called an -algebra homomorphism\nif it is both a ring homomorphism and an -linear map,\ni.e., and for any and any ,\nwhere the first two mean that is a ring homomorphism,\nand the last two mean that it is a linear map.\nRecall that for , the map ,\n for\n, is a Galois automorphism of the field .\nIf is bijective and satisfies that\nfor any and any ,\nthen is called a -algebra isomorphism,\nor -isomorphism for short.\nNote that the last two equalities of Eq.(2.1 ###reference_###)\nmean that is a -linear map.\nIn particular, if ,\nthen is called a -algebra automorphism,\nor a -automorphism for short.\nAnd, if is a -isomorphism,\nthen for any ideal of , the image is an ideal of and\n.\nThere are two typical examples as follows.\n(1). For any polynomial ,\nwe denote .\nThe Galois automorphism , ,\ninduces the map\nwhich is a -automorphism of .\nLet denote the order of . It is easy to check that\n.\n(2). For the matrix algebra\n\nconsisting of all matrices over ,\nthe map\nis a -automorphism of .\nLet .\nFor a positive integer , we write to denote the identity matrix of degree .\nLet denote\nthe -constacyclic permutation matrix of degree as follows\nIn particular, if then\n\nis the cyclic permutation matrix. By matrix multiplication,\nfor we have that\nwhich is the vector obtained by -constacyclically permuting\nthe items of the vector ; and that\nfor and\n,\nThus, we get another description of the -constacyclic codes\n(cf. Eq.(1.1 ###reference_###)) and\nthe -quasi -constacyclic codes (cf. Eq.(1.4 ###reference_###))\nas follows.\n(1) Let be a subspace of .\nThen is a -constacyclic code if and only if\n, for any .\n(2) Let be a subspace of .\nThen is a -quasi -constacyclic code\nif and only if\n, for any . \u220e\nIn the following we always denote\n,\nwhich is the quotient algebra of the polynomial algebra over the ideal\n generated by .\nAny residue class modulo has a unique representative\npolynomial with degree less than . Hence we can write\nFurther, the Cartesian product\nis an -module.\nFor and ,\nthe following identifications and results will be quoted later in this article.\n(1). There is a canonical linear isomorphism\nwhere \nand . It is easy to check that\nThen any element of \nis identified with the word of ;\nand by Lemma 2.2 ###reference_theorem2###(1),\nthe -constacyclic codes of length \nare identified with the ideals (-submodules) of .\n(2). For the -module ,\nwe have the following canonical linear isomorphism\nwhere ,\n,\nand .\nFor ,\nThen any element \nis identified with the word ,\nand by Lemma 2.2 ###reference_theorem2###(2)\nthe -quasi -constacyclic codes of length are identified\nwith the -submodules of .\nIf , then the algebra is semisimple, cf. [4 ###reference_b4###]; and\nby Ring Theory (e.g., cf. 
[19 ###reference_b19###] or [17 ###reference_b17###, Remark 2.4]),\nwe have the following two.\n(1) For any ideal (-submodule) of , there is an idempotent of such that\n and where .\nNote that is an algebra with identity (but not a subalgebra of \nin general because in general); in particular, makes sense.\nMoreover, if with being an idempotent and\nan ideal , then \nwith .\n(2) If and are -submodules of ; and\n is an -module isomorphism,\nthen , and there is a \nsuch that\n for any .\nIf , there is another identification.\nBy Eq.(2.7 ###reference_###), we denote\nLet \nbe the cyclic group of order .\nLet be the cyclic group algebra, i.e.,\n\nis an -vector space with basis and equipped\nwith the multiplication induced by the multiplication of the group as follows:\nThere is a canonical algebra isomorphism:\nThus, is identified with the cyclic group algebra\n.\nAnd by Remark 2.3 ###reference_theorem3###(1), the cyclic codes over of length \nare identified with the ideals of the cyclic group algebra .\nSimilarly, -quasi-cyclic codes over of length \nare identified with the -submodules of\nthe -module .\nWith the identifications Eq.(2.14 ###reference_###),\nwe have more algebraic preliminaries about to introduce.\nAssume that , then is semisimple.\nLet\nbe all primitive idempotents of . Correspondingly, the irreducible decomposition of in is as follows\nsuch that\nSince is irreducible over ,\neach is an extension field over with identity , and\n.\nAs ,\n; so\nFor , we denote\nIn general, in this paper we consider for any and \n(not restricted to the semisimple case)\nunless the hypothesis \u201c\u201d is explicitly assumed.\nOnce \u201c\u201d is assumed, the above preliminaries on the\nsemisimple case can be quoted.\nAs mentioned in Introduction,\nif then the\nself-dual -quasi-cyclic codes are asymptotically good.\nTo state it more precisely, we need the so-called -entropy function:\nwhich value strictly increases from to \nwhile increases from to .\nBy [23 ###reference_b23###, Theorem IV.17], for any real number with\n and ,\nthere are self-dual -quasi-cyclic codes over \n(hence ) such that:\n(1) the relative minimum distance \u2009 for ; and\n(2) the code length of satisfies that every is odd and coprime to ,\nand \n(in particular, ).\nLet with being the index set.\nIf there are positive integers and subsets (repetition is allowed)\n of with for \nsatisfying the following two conditions:\n(1) for each ()\nthe projection : ,\n,\nmaps bijectively onto ;\nand (2) for any ()\nthe number of the subsets which contains (i.e., ) equals ;\nthen we say that is a balanced code over of length ,\nand are called information index sets of the code .\nAn important result (see [13 ###reference_b13###, Corollary 3.4]) is that:\nif is a balance code with cardinality , then\nwhere is defined in Eq.(2.22 ###reference_###) and\nConstacyclic codes are balanced, see [12 ###reference_b12###, Lemma II.8].\nBy [13 ###reference_b13###, Corollary 3.4 and Corollary 3.5],\nwe can easily obtain the following lemma.\nIf is an ideal of , then the -submodule of is\na balanced code, hence for ,"
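The displayed formula for the q-entropy function of Eq. (2.22) is missing from the extracted text above. The sketch below uses the standard q-ary entropy function, which has the monotonicity property described there (it increases from 0 to 1 as its argument runs from 0 to (q-1)/q); it is an illustration, not the paper's code:

# Sketch: the standard q-ary entropy function h_q(x), as used in asymptotic code bounds.
# The displayed formula was lost in extraction; this is the usual definition, not copied from the file.
import math

def h_q(x: float, q: int) -> float:
    """q-ary entropy: x*log_q(q-1) - x*log_q(x) - (1-x)*log_q(1-x), with h_q(0) = 0."""
    if x == 0.0:
        return 0.0
    log_q = lambda t: math.log(t, q)
    return x * log_q(q - 1) - x * log_q(x) - (1 - x) * log_q(1 - x)

q = 4
for x in (0.1, 0.3, (q - 1) / q):
    print(f"h_{q}({x:.2f}) = {h_q(x, q):.4f}")   # increases up to 1.0 at x = (q-1)/q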
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "-quasi constacyclic codes over finite fields",
21
+ "text": "In this sections we are primarily concerned with the algebra properties of\n-quasi -constacyclic codes over .\nIn the following, we always assume that\nwhere denotes\nthe order of in the multiplication group .\nAs remarked in Remark 2.5 ###reference_theorem5###,\nmost of this section discusses for any and ,\nonly Theorem 3.9 ###reference_theorem9### and its corollaries consider\nthe semisimple case (i.e., ).\nFor , there are two projections , :\n as follows\nBy the linear isomorphism Eq.(2.11 ###reference_###),\nthe projections and \nare also defined on :\nfor ,\nFor any -submodule of ,\nrestricting to , we have an -homomorphism\n.\nObserve that the kernel of the restricted homomorphism is\nIt is known that for ,\nif a linear code is both\n-constacyclic and -constacyclic,\nthen either or (cf. [9 ###reference_b9###]).\nExtending the result to , we get the following lemma.\nLet such that .\nIf a subspace of is\nboth a -quasi -constacyclic code and\na -quasi -constacyclic code, then either or ;\nand it is the same for .\nSuppose . There is a codeword\nwith some , ; so we can assume that .\nBy the double -constacyclic permutation in Eq.(1.4 ###reference_###),\nthen contains the following word\nfor some , .\nUsing the double -constacyclic permutation times, we see that\n contains such words\nfor some , .\nIt follows that contains a basis of , hence .\nBy the same argument, either or .\n\u220e\nFor any matrix with ,\nwe denote \n(cf. Example 2.1 ###reference_theorem1###(2)).\nBy we denote the transpose of .\nLet\nbe the transpose of the matrix .\nWe call the Galois -transpose of .\nIf , is just the transpose matrix of .\nIf is even and , is the Hermitian transpose\nof .\nFor , and ,\nit is easy to check that\nHence, if , the operator \u201c\u201d is a -anti-automorphism\nof the matrix algebra \n(compare it with Eq(2.1 ###reference_###) and Example 2.1 ###reference_theorem1###(2)).\nWith the identification Eq.(2.9 ###reference_###) and the operator \u201c\u201d,\nwe can compute the Galois -inner product on \n(see Eq.(1.5 ###reference_###))\nin a matrix version:\nAnd for any -constacyclic code (i.e., any idea of ),\nthe Galois -dual code of is as follows:\nSimilarly, with the identification Eq.(2.11 ###reference_###)\nthe Galois -inner product on is computed in a matrix version:\nAnd for any -quasi -constacyclic code (-submodule of ),\nthe Galois -dual code of is\nIf , then we say that is Galois -self-dual,\nor Galois self-dual.\nIf is a -quasi -constacyclic code,\nthen is a -quasi -constacyclic code.\nIn particular, if ,\n is a -quasi -constacyclic code.\nAssume that .\nBy Lemma 2.2 ###reference_theorem2###(2), it is enough to prove that\nfor any \nwe have\nApplying Eq.(3.4 ###reference_###) and Eq.(3.6 ###reference_###) yields\nBecause ,\nit is easy to check that\nSince and \n(see Eq(3.1 ###reference_###)),\nwe have ,\nhence .\nSo\nBy Lemma 2.2 ###reference_theorem2###(2),\n. We get that\nWe are done.\n\u220e\nWe note that\n if and only if , since\nIf , then\nfor any -quasi -constacyclic code ,\nits Galois -dual code is\nstill a -quasi -constacyclic code.\nOtherwise (i.e., ,\nLemma 3.2 ###reference_theorem2### implies that,\nfor many -quasi -constacyclic codes,\ntheir Galois -dual codes are no longer -quasi -constacyclic codes.\nFor ,\nby we denote the ideal\nof generated by .\nSimilarly, for ,\nby we denote the -submodule of \ngenerated by .\nNote that any ideal of is generated by one element\n(cf. Remark 2.4 ###reference_theorem4###(1) for semisimple case,\nand cf. 
[11 ###reference_b11###, Lemma 4.3] for general case).\nHowever, some -submodules of \ncan not be generated by one element. For example, as an -submodule\n can not be generated by one element\n(because any -module generated by one element\nis a quotient of the regular module).\nFor any \nwith ,\nwe have an matrix\nwhose first row is the vector ,\nand each next row is obtained\nby -constacyclically permuting the present row (cf. Eq.(2.5 ###reference_###)).\nWe call \nthe -consta circulant matrix\nassociated with the polynomial .\nLet . Then we have:\n(1) is linearly generated by the rows of the \nmatrix .\n(2) is linearly generated by the rows of the \nmatrix .\n(1). Let be the ideal of generated by , i.e.,\nObviously,\n.\nSo is the subspace of linearly generated by\n.\nEq.(2.10 ###reference_###) and Eq.(3.8 ###reference_###) imply that\n is identified with the row vector \nwhich is just the first row of the matrix ;\nfor ,\n is identified with the row vector ,\nwhich is just the \u2019th row of the matrix .\nTherefore, is linearly generated by the rows of the matrix .\nObviously, (2) is proved in a similar way.\n\u220e\nFor , the -quasi -constacyclic code\n has a generating matrix .\nBy Lemma 3.7 ###reference_theorem7###, the rows of the matrix\n linearly generate the code .\nAnd the rows of the matrix \nare linearly independent.\n\u220e\nIn the rest of this section, we turn to the semisimple case, i.e., .\nExtending [17 ###reference_b17###, Theorem 3.2] which characterized the algebraic structure\nof -quasi-cyclic codes in the semisimple case, we characterize the algebraic structure of -quasi constacyclic\ncodes as follows.\nAssume that .\nIf is an -submodule of ,\nthen there are ideals of \nsatisfying that \nand an element such that\nConversely, if there are ideals of \nwith and an element\n, then in Eq.(3.9 ###reference_###)\nis an -submodule of .\nThe \u201cconversely\u201d part is obviously true because\nboth and \nare -submodules of , and\n.\nAssume that is an -submodule of ,\nand , \nare defined in Remark 3.1 ###reference_theorem1###.\nWe consult the module version of Goursat Lemma\n(see[17 ###reference_b17###, Remark 3.1]). Take\nThen are ideals of , .\nFor any , there is a unique \nsuch that , hence we have the map\nwhich is an -isomorphism, and\nSince is semisimple and\n are ideals of ,\nthere is an ideal of such that\n; see Remark 2.4 ###reference_theorem4###(1).\nSimilarly, we have an ideal of such that\n.\nThen and .\nThe -isomorphism in Eq.(3.15 ###reference_###) induces an\n-isomorphism such that\n for all .\nThus the image , and there is a \nsuch that for all ; cf. Remark 2.4 ###reference_theorem4###(2).\nIn conclusion,\nwe have an ideal of such that\n and a such that\nObviously, .\nThus Eq.(3.9 ###reference_###) holds.\n\u220e\nAssume that . Then for ,\n if and only if .\nIf then .\nConversely, assume that .\nNote that for an idempotent \nwhich is the identity of the ring \n(cf. Remark 2.4 ###reference_theorem4###(1)).\nThere is an element such that .\nFor any element , we can write with .\nThen . Thus .\n\u220e\nKeep the notation in Theorem 3.9 ###reference_theorem9###\n(in particular, ).\nAny -submodule of can be\nwritten as\nwhere and satisfy that\n.\nTake\n, and\n, hence .\nBy the above Lemma, we get Eq.(3.17 ###reference_###) immediately.\n\u220e"
22
+ },
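Most of the inline mathematics in the section above was lost in extraction. As a hedged reconstruction of the two standing definitions its arguments rely on — the λ-constacyclic shift and the h-Galois inner product with its dual — the standard forms from the Galois-duality literature are shown below; the notation is mine and may differ from the authors' Eq.(1.4), Eq.(1.5) and Eq.(3.6).

```latex
% Hedged reconstruction (standard definitions over F_q, q = p^e, 0 <= h < e):
\[
  \sigma_\lambda(a_0, a_1, \dots, a_{n-1}) \;=\; (\lambda a_{n-1},\, a_0,\, \dots,\, a_{n-2}),
\]
\[
  \langle a, b\rangle_h \;=\; \sum_{i=0}^{n-1} a_i\, b_i^{\,p^h},
  \qquad
  C^{\perp_h} \;=\; \bigl\{\, b \in F_q^{\,n} \;:\; \langle c, b\rangle_h = 0 \ \ \forall\, c \in C \,\bigr\}.
\]
```

At h = 0 this is the Euclidean inner product and, when e is even, h = e/2 gives the Hermitian one, which matches how the later sections specialize the Galois case.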
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Galois self-dual -quasi constacyclic codes",
27
+ "text": "In this section\nwe investigate the Galois -self-dual\n-quasi -constacyclic codes over .\nBy Lemma 3.3 ###reference_theorem3### and Remark 3.4 ###reference_theorem4###,\nany Galois -self-dual -quasi -constacyclic code\n(i.e., ) is also -constacyclic.\nWe study them in two cases:\n (i.e., ),\nor (i.e., ).\nStill, this section discusses for any and \nexcept for Theorem 4.10 ###reference_theorem10### which considers\nthe semisimple case (i.e., )."
28
+ },
29
+ {
30
+ "section_id": "4.1",
31
+ "parent_section_id": "4",
32
+ "section_name": "The case that",
33
+ "text": "Our concern in this subsection is\nthe Galois -self-dual 2-quasi constacyclic codes over \nunder the assumption that .\nAssume that \n(i.e., ).\nThe following three are equivalent to each other:\n(1) is a Galois -self-dual -quasi -constacyclic code\nover of length .\n(2)\n is a -submodule of \ngenerated by , where\n with , and\n are viewed as the constant polynomials of .\n(3)\n is an -linear code of length with a generating matrix\n, where \nwith .\nObserve that by Corollary 3.8 ###reference_theorem8###,\nthe statements (2) and (3) are equivalent. Suppose that (3) holds.\nThen\n\nwhich implies , cf. Eq.(3.6 ###reference_###).\nSince the rank of the generating matrix is ,\nwe have , and so the statement (1) follows.\nTherefore, it suffices to show that (1) implies (3).\nAssume that (1) holds, i.e., .\nBy Lemma 3.3 ###reference_theorem3###,\n is both a -quasi -constacyclic code\nand a -quasi -constacyclic code.\nSince ,\nwe deduce from Lemma 3.2 ###reference_theorem2### that\neither or .\nSuppose that .\nSince , by Eq.(3.2 ###reference_###),\nwe have that ,\nwhich is impossible because \nis not Galois -self-dual.\nSo, it must be the case that .\nBy the same argument, we have .\nAs , we can take \nwith .\nLet be the submodule\nof generated by .\nObviously, .\nBy Corollary 3.8 ###reference_theorem8###, has the generating matrix\nThe rank of the matrix is , hence . In a word,\n is linearly generated by the rows of\nthe matrix in Eq.(4.1 ###reference_###).\nBecause is also a -quasi -constacyclic code,\nby the same way we can also get that\n is linearly generated by the rows of the matrix\n.\nThus any row of is a linear combination of\nthe rows of the matrix in Eq.(4.1 ###reference_###).\nSo there is an matrix such that\nIt follows that , and so . The latter equality is as follows:\nSince ,\nit follows that for ;\nhence the polynomial is a constant polynomial: \nfor some ,\nand has a generating matrix .\nBecause is Galois -self-dual,\nhence . In conclusion, (3) holds.\n\u220e\nAssume that .\nThe Galois -self-dual -quasi -constacyclic codes\nof length exist\nif and only if the polynomial has roots in ;\nand in that case,\nthe -submodules of with\n being a root of are all the Galois -self-dual\n-quasi -constacyclic codes of length .\n\u220e\nNote that in the Euclidean case \u201c\u201d,\n if and only if .\nAssume that .\nThe self-dual -quasi -constacyclic codes over of length \nexist if and only if\n;\nand in that case,\nthe -submodules of with satisfying\n are the all self-dual -quasi\n-constacyclic codes over of length .\nThe polynomial has roots in if and only if\n is even or is odd and ,\nif and only if .\n\u220e\nAs a comparison, the Hermitian self-dual ones always exist.\nAssume that is even and .\nThen the Hermitian self-dual -quasi -constacyclic codes over \nof length always exist;\nand in that case, the -submodules of with\n satisfying are the all Hermitian self-dual\n-quasi -constacyclic codes over of length .\nIf is even,\nthen the polynomial \nalways has roots in .\nAssume that is odd. The order of the multiplication group\nand . 
So, there is a subgroup \nof the multiplication group with order .\nThen any generator of the group is a root of\nthe polynomial .\n\u220e\nAs a consequence, we get the following.\nAssume that (i.e., ).\nThen the Galois -self-dual -quasi -constacyclic codes over \nare asymptotically bad.\nIf the polynomial has no root in , then\nGalois -self-dual -quasi -constacyclic codes over do not\nexist, hence Galois -self-dual -quasi -constacyclic codes over \nare asymptotically bad.\nOtherwise, any Galois -self-dual -quasi -constacyclic code\n has minimum weight ,\nbecause any row of the generating matrix \nhas weight . The relative minimum distance\n while .\nSo Galois -self-dual -quasi -constacyclic codes over \nare asymptotically bad.\n\u220e"
34
+ },
35
+ {
36
+ "section_id": "4.2",
37
+ "parent_section_id": "4",
38
+ "section_name": "The case that",
39
+ "text": "We start with a general result about the -consta circulant matrices\n(cf. Definition 3.6 ###reference_theorem6###).\nAs in Example 2.1 ###reference_theorem1###(2), denotes the \nmatrix algebra over .\nWe consider its subset consisting of\nall the -consta circulant matrices of degree :\nwhere is defined in Eq.(2.4 ###reference_###)\nand is defined in Eq.(3.8 ###reference_###).\nWith the notation as above.\n\nis a subalgebra of and\nthe following is an algebra isomorphism:\nThe following is obviously an -algebra homomorphism:\nSince ,\nthe kernel of the homomorphism is the ideal of generated by .\nNote that the quotient algebra , see Eq.(2.7 ###reference_###).\nBy Homomorphism Theorem, the homomorphism induces an injective homomorphism:\nThe image of this homomorphism is exactly\n,\ncf. Definition 3.6 ###reference_theorem6###.\nThus \nis a subalgebra of \nand the homomorphism induces\nthe algebra isomorphism in Eq.(4.2 ###reference_###).\n\u220e\nIn the rest of this subsection we assume that \ni.e., , cf. Remark 3.4 ###reference_theorem4###.\nBy Eq.(3.3 ###reference_###), we have\nthe following map\nand by Eq.(3.4 ###reference_###), for any and ,\nSo \nis a -anti-automorphism of the matrix algebra .\nKeep the notation as above.\nAssume that . Then ,\nand the restricted map\n\nas follows is a -automorphism of the -algebra\n:\nSince ,\nby Eq.(3.7 ###reference_###) we deduce that\nBy Eq.(4.2 ###reference_###),\nany \nis associated with ,\nwhere , so\nThus .\nRestricting the -anti-automorphism \nto , we\nget the -anti-automorphism Eq.(4.3 ###reference_###),\nwhich is in fact a -automorphism because\n\nis a commutative algebra.\n\u220e\nNext, we introduce an operator \u201c\u201d on ,\nwhich is the key to obtaining the necessary and sufficient conditions for\n-quasi -constacyclic codes being Galois -self-dual.\nWith the isomorphism in Eq.(4.2 ###reference_###),\ninspiring by Eq.(4.3 ###reference_###) and Eq.(4.4 ###reference_###),\nfor we define\nwhere , cf. Example 2.1 ###reference_theorem1###(1).\nAssume that . Let , and be as in Eq.(4.5 ###reference_###),\nEq.(4.2 ###reference_###) and Eq.(4.3 ###reference_###)\nrespectively.\nThen\nand the following map is a -automorphism of the algebra :\nFor ,\nby the definition of in Eq.(4.5 ###reference_###)\nand by Eq.(4.4 ###reference_###),\nThat is,\n. So\nEq.(4.6 ###reference_###) holds; equivalently, the following diagram is commutative:\nBecause is a -automorphism and both and \nare algebra isomorphisms, by Eq.(4.6 ###reference_###)\nwe see that Eq.(4.7 ###reference_###) is a -automorphism of .\n\u220e\nRecall that the -quasi -constacyclic code generated by \nhas been defined in Remark 3.5 ###reference_theorem5###.\nFor the -submodules of generated by one element,\nwe have the following Galois self-duality criteria.\nAssume that .\n(1) The -quasi -constacyclic code\n is Galois -self-dual if and only if\n and the rate .\n(2) The -quasi -constacyclic code \nis Galois -self-dual if and only if .\n(1) The is Galois -self-dual if and only if\n and .\nWhat remains is to show that\nObserve that by Lemma 3.7 ###reference_theorem7###,\n is linearly generated by the rows of the\nmatrix .\nThus if and only if\nThus, Eq.(4.12 ###reference_###) follows from Lemma 4.8 ###reference_theorem8###\n(cf. 
Eq.(4.11 ###reference_###)) immediately.\n(2) \nBy Corollary 3.8 ###reference_theorem8###, has the generating matrix\n.\nIn particular, ,\ni.e., .\nSimilarly to the proof of , we have\n if and only if .\n\u220e\nIn the semisimple case,\nextending [17 ###reference_b17###, Theorem 4.2], we have the following theorem\nto characterize the Galois self-dual -quasi -constacyclic codes.\nIf , by Corollary 3.11 ###reference_theorem11###\nany -submodule of can be written as\nwhere with and .\nAssume that and .\nLet in Eq.(4.13 ###reference_###) be any -quasi -constacyclic code.\nThen is Galois -self-dual if and only if\nthe following two hold:\n(1)\n(2) .\nThe is Galois -self-dual if and only if\n and .\nNote that the inner product is linear for the first\nvariable, and it is -linear for the second variable,\nbut it is not symmetric in general.\nSo is equivalent to the following\nThe first line holds obviously. By Eq.(4.12 ###reference_###),\nthe second and the third lines are equivalent to that\n and , respectively.\nThe last line is equivalent to that .\nTurn to the forth line which is equivalent to .\nNote that is a ring with identity which is an idempotent,\ncf. Remark 2.4 ###reference_theorem4###(1), hence\n\u201c\u201d implies that for a .\nSo . Similarly,\n.\nThe theorem is proved.\n\u220e"
40
+ },
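The proposition opening this subsection — that polynomial residues modulo x^n − λ correspond multiplicatively to λ-consta-circulant matrices — lends itself to a quick numerical sanity check. The sketch below is my own illustration over a prime field GF(p) with an integer unit λ, not the authors' construction; the function names and toy parameters are made up for the example.

```python
# Hedged illustration over GF(p): multiplying polynomials modulo x^n - lambda
# should agree with multiplying the associated lambda-consta-circulant matrices.
import numpy as np

def consta_circulant(coeffs, lam, p):
    """First row = coefficients of a(x); row k = coefficients of x^k * a(x)
    reduced modulo x^n - lambda, all entries taken mod p."""
    a = np.array(coeffs, dtype=np.int64) % p
    rows = [a]
    for _ in range(len(a) - 1):
        prev = rows[-1]
        nxt = np.empty_like(prev)
        nxt[0] = (lam * prev[-1]) % p   # the wrapped term x^n is replaced by lambda
        nxt[1:] = prev[:-1]
        rows.append(nxt)
    return np.vstack(rows)

def polymul_mod(a, b, lam, p):
    """Coefficients of a(x) * b(x) reduced modulo x^n - lambda over GF(p)."""
    n = len(a)
    full = np.zeros(2 * n - 1, dtype=np.int64)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            full[i + j] += ai * bj
    red = full[:n].copy()
    red[: n - 1] += lam * full[n:]      # substitute x^n = lambda
    return red % p

p, lam = 5, 2
a, b = [1, 3, 0, 1], [2, 0, 4, 1]
lhs = consta_circulant(polymul_mod(a, b, lam, p), lam, p)
rhs = consta_circulant(a, lam, p) @ consta_circulant(b, lam, p) % p
assert np.array_equal(lhs, rhs)   # the map a(x) -> M_lambda(a) is multiplicative
print(lhs)
```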
41
+ {
42
+ "section_id": "5",
43
+ "parent_section_id": null,
44
+ "section_name": "Hermitian self-dual -quasi-cyclic codes",
45
+ "text": "In this section, we always assume that is even and ,\nand .\nThe map , , is\na Galois automorphism of order , and\nis the Hermitian inner product on .\nIn this section we consider \nand , cf. Eq.(2.13 ###reference_###);\nand prove that the Hermitian self-dual -quasi-cyclic codes\nare asymptotically good."
46
+ },
47
+ {
48
+ "section_id": "5.1",
49
+ "parent_section_id": "5",
50
+ "section_name": "The operator \u201c\u201d on",
51
+ "text": "Since ,\nthe results in Subsection 4.2 ###reference_###\ncan be quoted freely for and .\nIn particular, the operator \u201c\u201d in Lemma 4.8 ###reference_theorem8###\nis a -automorphism of :\nwhere and\n, i.e.,\n,\ncf. Eq.(4.5 ###reference_###).\nBy Eq.(2.14 ###reference_###), we have the identification:\nwhere is the cyclic group of order \nand is the cyclic group algebra.\nSo the symbol \u201c\u201d can be identified with the element of the group ,\nand we can write , and the expressions ,\n etc. make sense. Hence and\nThe -automorphism \u201c\u201d of in\nEq.(5.1 ###reference_###)\nis of order 2.\nSince , for any by Eq.(5.2 ###reference_###) we have\nThus the order of the operator \u201c\u201d equals or .\nThere is an such that .\nThen in we have\n.\nSo the order of the operator \u201c\u201d equals .\n\u220e\nIf is a non-zero ideal of which is invariant\nby the operator \u201c\u201d (i.e., ),\nthen the restriction of the operator \u201c\u201d to \ninduces a -automorphism of of order .\nBy Lemma 5.1 ###reference_theorem1###,\nthe restriction of \u201c\u201d to is of order or .\nLet with . If ,\nthen the restriction of \u201c\u201d to is not the identity.\nOtherwise, ; there is an \nwith ; so\n.\nIn conclusion, the restriction of \u201c\u201d to is of order .\n\u220e\nNote that \u201c\u201d is assumed in this section.\nRecall from Eq.(2.15 ###reference_###) that\n, \nare all primitive idempotents of .\nThen the -automorphism \u201c\u201d permutes the primitive idempotents,\ni.e., every is still a primitive idempotent.\nNote that . We can reorder the other primitive idempotents\nsuch that for ,\n but for .\n(1) For , we get . The restriction of the map \u201c\u201d in Eq.(5.1 ###reference_###)\nto induces a -automorphism of \nof order 2 (cf. Corollary 5.2 ###reference_theorem2###) as follows\nLet . By Eq.(2.19 ###reference_###),\nthe ideal is a field extension over with\ncardinality .\nSo for .\n(2) For , ,\nand the restriction of the map \u201c\u201d in Eq.(5.1 ###reference_###)\nto induces a -isomorphism as follows\nLet ,\nthen .\nDenote ,\nby Eq.(2.19 ###reference_###) we have that\nIt is easy to check that\n.\nSo .\nThe restriction of the map \u201c\u201d in Eq.(5.1 ###reference_###)\nto induces a\n-automorphism of of order 2\n(cf. Corollary 5.2 ###reference_theorem2###) as follows\nFor convenience, in the following we denote for .\nThen for , and\nThus, can be rewritten as\nAny is decomposed into\nThe is called the -component of .\nFor ,\nby Eq.(5.5 ###reference_###) it is trivial to check that\nKeep the notation in Remark 5.3 ###reference_theorem3###,\nwe have for ;\nand for . Thus,\nwhere and for ; cf. Eq.(2.21 ###reference_###)."
52
+ },
53
+ {
54
+ "section_id": "5.2",
55
+ "parent_section_id": "5",
56
+ "section_name": "A class of Hermitian self-dual -quasi-cyclic codes",
57
+ "text": "Recall that the -quasi-cyclic code of \n(defined in Remark 3.5 ###reference_theorem5###)\nis Hermitian self-dual if and only if , see Lemma 4.9 ###reference_theorem9###.\nSo we denote\nAny corresponds to a Hermitian self-dual\n-quasi-cyclic code .\nSet\nBy Eq.(5.5 ###reference_###), Eq.(5.7 ###reference_###) and Eq.(5.8 ###reference_###),\n(1)\nIf (i.e. ),\nthen .\n(2)\nIf (i.e. and ),\nthen .\n(1).\nBy Remark 5.3 ###reference_theorem3###(1), is the field with\n. For ,\n. So,\n if and only if\n.\nHence equals the number of the roots in \nof the -polynomial ,\nwhere denotes the finite field with cardinality .\nIf , then . The order of the multiplicative group\n is\nThus the multiplication group \nhas a subgroup of order ,\nand all elements of are roots of\nthe polynomial .\nHence .\nOtherwise is odd, then is even.\nBy Eq.(5.13 ###reference_###),\nwe get that .\nSo has a subgroup\n of order . The elements of are\njust all roots of the polynomial .\nSince\nall roots of the polynomial are inside .\nThus, .\n(2).\nFor , and .\nBy Remark 5.3 ###reference_theorem3###(2),\n.\nFor \nwith and ,\nby Eq.(5.4 ###reference_###) we have , and so\n.\nIt follows that: if and only if\n and . We take\n, then \n(where is the inverse of in , not in ),\nhence \nis uniquely determined. Thus,\nSince is a field with cardinality ,\n.\n\u220e\nFor any subset ,\nrefining the notation in Eq.(5.6 ###reference_###) and in Remark 5.3 ###reference_theorem3###,\nwe define an ideal of and an integer as follows\nObviously, can be written as a disjoint union:\nSimilarly to Eq.(5.9 ###reference_###), we have\nIt is known that ([23 ###reference_b23###, Lemma 4.7])\nfor integers ,\nif for ,\nthen\nFor a subset , if \n(where is defined in Eq.(2.21 ###reference_###)), then\nFor , we deduce that since .\nBy Lemma 5.4 ###reference_theorem4### we have that\nIf , using Eq.(5.18 ###reference_###) we get\nwhere the last equality follows by Eq.(5.15 ###reference_###);\nand\nThat is, if then\nIf , we set ,\nand so , where .\nBy Lemma 5.4 ###reference_theorem4###(1),\nwe see that , and\nIt follows by Eq.(5.19 ###reference_###) that\nand that\nThus the lemma holds.\n\u220e\nBy Eq.(5.12 ###reference_###), .\nWe have the following at once.\nIf \n(where is defined in Eq.(2.21 ###reference_###)), then\nAny element can be written as\n\n(cf. Eq.(5.7 ###reference_###)).\nWe denote\nObviously . For it is easy to check that\nbut the converse is not true in general.\nFor , we denote\nwhere is defined in Remark 3.5 ###reference_theorem5###\nand is defined in Eq.(5.10 ###reference_###).\n(1)\nIf , then , hence .\n(2) If , then\n(1). Assume that , then (cf. Eq.(5.10 ###reference_###)) and\n for an element .\nThe former equality implies that is invertible with .\nThe latter equality implies that and .\nWe deduce that since is invertible.\n(2). By Eq.(5.7 ###reference_###), we write\n, and\n with .\nBy Eq.(5.8 ###reference_###),\nWe count the number of such in two cases.\nCase 1: , i.e., .\nSince , we have that , hence .\nThen any satisfies that .\nCase 2: , i.e., . There are two subcases.\nSubcase 2.1: . Since is a field,\n is invertible in ,\nand so there is a unique such that\n. We see that there is at most one such that\n.\nSubcase 2.2: .\nBy Remark 5.3 ###reference_theorem3###(2),\n.\nWe can write and , where\n and .\nSince , at least one of and is nonzero.\nWe may assume that .\nTake as in Eq.(5.14 ###reference_###),\nthen implies that\n in . Hence is uniquely determined.\nIn a word, if (i.e., ), then there is at most one\n such that . 
Thus\nBy Lemma 5.5 ###reference_theorem5### and Corollary 5.6 ###reference_theorem6###,\nWe are done.\n\u220e"
58
+ },
59
+ {
60
+ "section_id": "5.3",
61
+ "parent_section_id": "5",
62
+ "section_name": "Hermitian self-dual -quasi-cyclic codes are asymptotically good",
63
+ "text": "Keep the notation in Subsection 5.2 ###reference_###.\nFrom now on, let be a real number satisfying that\n(where is the -entropy function defined in Eq.(2.22 ###reference_###))\nAnd set\nRecall that is defined in Eq.(2.21 ###reference_###),\n is defined in Eq.(2.23 ###reference_###) and\n is defined in Eq.(5.21 ###reference_###).\n.\nAssume that . By Eq.(5.25 ###reference_###),\nwe have an such that \nand , i.e., .\nFrom Lemma 5.7 ###reference_theorem7###(1) and Eq.(5.20 ###reference_###),\nwe deduce that ; hence\n, and\n and .\nIf , then ,\nhence (cf. Eq.(2.20 ###reference_###)),\nwhich contradicts that \nand (see Eq.(5.22 ###reference_###)).\nThus .\nBy the definition of in Eq.(2.21 ###reference_###),\nwe obtain that .\nSo the lemma is proved.\n\u220e\nIf \nand , then\nNote that any ideal of is a direct sum of some of\n (cf. Eq.(2.19 ###reference_###)).\nFor , the dimension ,\nwhere is defined in Eq.(2.21 ###reference_###).\nSuppose that \nwhere .\nIf is not a direct summand of , then the number of such \nis at most ;\notherwise is a direct summand of , then the number of such \nis at most .\nHence\nApplying Lemma 5.8 ###reference_theorem8### and Lemma 5.7 ###reference_theorem7###(2), we obtain\nwhere the number \nis independent of the choice of .\nBy Lemma 2.8 ###reference_theorem8###, for \nwe have .\nSo\nwhere the number \nis independent of the choice of .\nUsing Eq.(5.26 ###reference_###) yields\nNote that \n(since )\nand . We further get\nThat is,\n.\n\u220e\nBy [3 ###reference_b3###, Lemma 2.6] (or [14 ###reference_b14###, Lemma II.6]),\nthere are odd positive integers coprime to such that\n, where\n is defined in Eq.(2.21 ###reference_###).\nSince ,\nwe see that\nthere are odd positive integers coprime to such that\nwhich implies that , hence .\nObviously, we can assume that for .\nAssume that and \nas in Eq.(5.22 ###reference_###). Then there are\nHermitian self-dual -quasi-cyclic codes over \n(hence )\nsuch that the code length of goes to infinity and the\nrelative minimum distance for .\nTake as in Eq.(5.27 ###reference_###).\nThere is a positive real number such that\n\nfor large enough index . So we can further assume that\nTaking in Corollary 5.6 ###reference_theorem6### and Lemma 5.9 ###reference_theorem9###,\nand denoting by ,\nwe get\nBy Eq.(5.28 ###reference_###) and that , we get that\nTherefore, we can further assume that\n for .\nSo we can take .\nThen is a Hermitian self-dual -quasi-cyclic code\nof length with .\n\u220e"
64
+ },
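The asymptotic argument above chooses δ against the q-entropy function of Eq.(2.22), whose formula did not survive extraction. As a hedged reconstruction for the reader's convenience, the standard q-ary entropy function used in such Gilbert–Varshamov-type estimates is shown below; the paper's version may be stated with q replaced by q² in the Hermitian setting.

```latex
% Standard q-ary entropy function, defined for 0 < x < 1 - 1/q, with h_q(0) = 0:
\[
  h_q(x) \;=\; x \log_q(q-1) \;-\; x \log_q x \;-\; (1-x)\log_q(1-x).
\]
```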
65
+ {
66
+ "section_id": "6",
67
+ "parent_section_id": null,
68
+ "section_name": "Hermitian (Euclidean) self-dual -quasi\nconstacyclic codes",
69
+ "text": "In this section we prove that if \n( and , respectively),\nthen Hermitian self-dual (Euclidean self-dual, respectively)\n-quasi -constacyclic codes are asymptotically good.\nWe first relate \nwith ,\nand then turn to the Hermitian case ()\nand the Euclidean case ().\nAssume that and ,\nwhere as in Eq.(3.1 ###reference_###).\nThen there is a such that\n and the map\nsatisfies the following:\n(1) is an algebra isomorphism.\n(2)\nThe weight ,\n\u2009 .\n(3)\n, \u2009 .\nSince , there are integers such that .\nWe deduce that\n\nsince .\nTake , then .\nIt is known that (1) and (2) has been proved in [5 ###reference_b5###, Theorem 3.2].\nTherefore, it suffices to show that (3).\nLet and\n.\nThen and\n.\nBy Eq.(3.5 ###reference_###), we have that\nApplying the equality \nand the assumption , we get\nThus\nwhich proves (3).\n\u220e\nAssume that and ,\nwhere as in Eq.(3.1 ###reference_###). Let\nwhere is defined in Lemma 6.1 ###reference_theorem1###.\nThen the following hold.\n(1) is a module isomorphism.\n(2) , .\n(3) ,\n .\n(1) For and ,\nby Lemma 6.1 ###reference_theorem1###(1),\nThus Eq.(6.2 ###reference_###) is a module homomorphism.\nObserve that by Lemma 6.1 ###reference_theorem1###(1), must be bijective,\nhence it is a module isomorphism.\n(2) By Lemma 6.1 ###reference_theorem1###(2), we get\n(3) \nFor , ,\nwhere , and\n, ,\nit is obvious that\nThen\nWe are done.\n\u220e\nAssume that and , where\n as in Eq.(3.1 ###reference_###).\nLet be as in Eq.(6.2 ###reference_###).\nThen is an -submodule of if and only if\n is an -submodule of .\nAt that case, the following hold.\n(1) The rate .\n(2) The relative minimum distance .\n(3) is Galois -self-dual\nif and only if is Galois -self-dual.\nBy Corollary 6.2 ###reference_theorem2###(1),\nthe map in Eq.(6.2 ###reference_###)\nis a module isomorphism.\nSo is an -submodule of if and only if\n is an -submodule of .\nAssume that it is this case.\n(1) Since is an isomorphism,\n, and so\n(2) It holds by Corollary 6.2 ###reference_theorem2###(2) obviously.\n(3) Observe that by Corollary 6.2 ###reference_theorem2###(3),\n in \nif and only if in .\nFurther, applying the above conclusion (1) yields that\n if and only if .\nThus (3) holds.\n\u220e\nIn the following, we consider the asymptotic property of\nHermitian (Euclidean) self-dual -quasi\n-constacyclic codes.\nAssume that is even.\nThe Hermitian self-dual -quasi -constacyclic codes over \nare asymptotically good if and only if .\nIf ,\nthen Hermitian self-dual -quasi -constacyclic codes over \nare asymptotically bad, see Theorem 4.5 ###reference_theorem5###.\nAssume that .\nLet and .\nBy Theorem 5.10 ###reference_theorem10###, there are\nHermitian self-dual -quasi-cyclic codes \nover such that:\nthe code length of satisfy that is odd and coprime to \nand\n;\nin particular, , and hence ;\nthe rate \nand the relative minimum distance \u2009 for .\nLet ,\nwhere as in Eq.(3.1 ###reference_###).\nBy [23 ###reference_b23###, Lemma II.2],\nIf , then .\nSince ,\nthere are only finitely many such that .\nRemoving such , we can further assume that\n for . 
Thus,\napplying the isomorphism Eq.(6.2 ###reference_###) to \nyields in , .\nWe get the code sequence\nBy Theorem 6.3 ###reference_theorem3###, each \nis Hermitian self-dual -quasi -constacyclic codes over ,\nand the code length goes to infinity,\nthe rate \nand the relative minimum distance\n for .\n\u220e\nFor Euclidean case,\nthe \u201cEuclidean self-dual\u201d is referred to as \u201cself-dual\u201d.\nAssume that .\nThe self-dual -quasi -constacyclic codes over \nare asymptotically good if and only if .\nBy Theorem 4.5 ###reference_theorem5###, if then\nthe self-dual -quasi -constacyclic codes are asymptotically bad.\nIn the following we assume that .\nIf , it has been shown in [23 ###reference_b23###] (cf. Remark 2.6 ###reference_theorem6###)\nthat the self-dual -quasi-cyclic codes over are asymptotically good.\nAssume that \n(i.e., consider the -quasi negacyclic codes).\nLet and .\nSince , by\nRemark 2.6 ###reference_theorem6###, there are\nself-dual -quasi-cyclic codes over , such that:\nthe code length of satisfy that is odd and coprime to ,\nand ;\nin particular, , and hence ;\nthe rate \nand the relative minimum distance \u2009 for .\nSimilarly to the proof of Theorem 6.4 ###reference_theorem4###,\nwe get self-dual -quasi negacyclic codes\nover such that their code length goes to infinity,\n\nand their relative minimum distance for .\n\u220e"
70
+ },
71
+ {
72
+ "section_id": "7",
73
+ "parent_section_id": null,
74
+ "section_name": "Conclusions",
75
+ "text": "The purpose of this paper is to characterize\nthe Galois self-dual -quasi -constacyclic codes, and\nto investigate their asymptotic properties.\nWe first showed the algebraic structure of -quasi -constacyclic codes.\nThen we found that the Galois -self-dual -quasi -constacyclic codes\nbehave much differently according to whether \n( equivalently) or\n\n( equivalently).\nIn both the cases, we exhibited the necessary and sufficient conditions for the\n-quasi -constacyclic codes being Galois -self-dual,\nsee Theorem 4.1 ###reference_theorem1### (for the former case),\nLemma 4.9 ###reference_theorem9### and Theorem 4.10 ###reference_theorem10### (for the latter case).\nAnd in the former case we proved that\nthe Galois -self-dual -quasi -constacyclic codes\nare asymptotically bad.\nThen we focused on the case that .\nA methodological innovation is that we introduced\n(in Eq.(4.5 ###reference_###) and Lemma 4.8 ###reference_theorem8###)\nthe operator \u201c\u201d on the algebra ,\nwhich is proved to be a powerful technique for studying Galois dualities.\nAn important contribution is that we proved that\nthe Hermitian self-dual (when is even) and the\nEuclidean self-dual (when )\n-quasi -constacyclic codes are asymptotically good.\nThe proof are divided into two steps.\nFirst we proved (in Theorem 5.10 ###reference_theorem10###) the asymptotic goodness of the\nHermitian self-dual -quasi-cyclic codes\n(the asymptotic goodness of the Euclidean self-dual -quasi-cyclic codes\nhas been obtained in [23 ###reference_b23###]).\nAnd then we relate the -quasi -constacyclic codes\nto the -quasi-cyclic codes by an algebra isomorphism\nwhich preserves the Hamming weight and the Galois -inner products,\nsee Corollary 6.2 ###reference_theorem2###;\nhence the asymptotic goodness of the Hermitian self-dual and Euclidean self-dual\n-quasi -constacyclic codes are derived\n(Theorem 6.4 ###reference_theorem4###\nand Theorem 6.5 ###reference_theorem5###).\nA question remains unsolved:\nwith the assumption that ,\nare the Galois -self-dual -quasi -constacyclic codes over ,\nexcept for the Hermitian self-dual ones\nand the Euclidean self-dual (when ) ones,\nasymptotically good?\nIt seems that the existing approaches in this paper\nare not enough to solve this question.\nWe look forward this question to be solved perfectly in the future.\nA special sub-question is: are\nthe self-dual -quasi negacyclic codes over asymptotically good?\nThe result in [28 ###reference_b28###] and Theorem 6.5 ###reference_theorem5###\nof this paper together imply a positive answer to this sub-question;\nbut the argument of [28 ###reference_b28###] depends on Artin\u2019s primitive root conjecture.\nRecently, in [15 ###reference_b15###] we further developed a number-theoretic and algebraic method\nto analyse the -cosets and proved the asymptotic goodness of\nany -ary self-dual -quasi negacyclic codes."
76
+ }
77
+ ],
78
+ "appendix": [],
79
+ "tables": {},
80
+ "image_paths": {},
81
+ "validation": true,
82
+ "references": [],
83
+ "url": "http://arxiv.org/html/2404.08402v2"
84
+ }
20241127/2404.11161v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2405.05160v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2405.11828v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2405.17472v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2405.19644v3.json ADDED
@@ -0,0 +1,185 @@
1
+ {
2
+ "title": "EgoSurgery-Phase: A Dataset of Surgical Phase Recognition from Egocentric Open Surgery Videos",
3
+ "abstract": "Surgical phase recognition has gained significant attention due to its potential to offer solutions to numerous demands of the modern operating room. However, most existing methods concentrate on minimally invasive surgery (MIS), leaving surgical phase recognition for open surgery understudied. This discrepancy is primarily attributed to the scarcity of publicly available open surgery video datasets for surgical phase recognition. To address this issue, we introduce a new egocentric open surgery video dataset for phase recognition, named Egosurgery-Phase. This dataset comprises 15 hours of real open surgery videos spanning 9 distinct surgical phases all captured using an egocentric camera attached to the surgeon\u2019s head. In addition to video, the Egosurgery-Phase offers eye gaze. As far as we know, it is the first real open surgery video dataset for surgical phase recognition publicly available. Furthermore, inspired by the notable success of masked autoencoders (MAEs) in video understanding tasks (e.g., action recognition), we propose a gaze-guided masked autoencoder (GGMAE). Considering the regions where surgeons\u2019 gaze focuses are often critical for surgical phase recognition (e.g., surgical field), in our GGMAE, the gaze information acts as an empirical semantic richness prior to guiding the masking process, promoting better attention to semantically rich spatial regions. GGMAE significantly improves the previous state-of-the-art recognition method ( in Jaccard) and the masked autoencoder-based method ( in Jaccard) on Egosurgery-Phase. The dataset is released at project page.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Automated analysis of surgical videos is indispensable for various purposes, including providing real-time assistance to surgeons, supporting education, and evaluating medical treatments. Surgical phase recognition, the recognition of the transitions of high-level stages of surgery, is a fundamental component in advancing these objectives.\nSurgical phase recognition has gained considerable attention with numerous approaches [1 ###reference_b1###, 4 ###reference_b4###, 7 ###reference_b7###, 8 ###reference_b8###, 16 ###reference_b16###, 17 ###reference_b17###, 21 ###reference_b21###]. While surgical phase recognition is important across all surgical methods, the predominant focus of research endeavors has been on minimally invasive surgery (MIS), leaving open surgery phase recognition comparatively underexplored. This discrepancy primarily stems from the scarcity of publicly available large-scale open surgery datasets for phase recognition. In the surgical phase recognition for MIS, several large-scale datasets [17 ###reference_b17###, 20 ###reference_b20###] have been released, driving advancements in learning-based algorithms. Conversely, the absence of comparable large-scale datasets for open surgery phase recognition has significantly impeded progress in achieving accurate surgical phase recognition within the open surgery domain.\nTo tackle this issue, we introduce Egosurgery-Phase, the first large-scale egocentric open surgery video dataset for phase recognition. 20 videos of procedures of 10 distinct surgical types with a total duration of 15 hours conducted by 8 surgeons are collected and annotated into 9 phases. The videos have been meticulously pre-processed for de-identification. EgoSurgery-Phase offers a rich collection of video content capturing diverse interactions among individuals (e.g., surgeons, assistant surgeons, anesthesiologists, perfusionists, and nurses), varied operative settings, and various lighting conditions. Moreover, in addition to video, EgoSurgery-Phase provides eye gaze data.\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### Furthermore, inspired by the remarkable performance of Masked Autoencoders (MAEs) [5 ###reference_b5###], which learns meaningful representations by reconstructing the masked tokens, in video understanding tasks (e.g., action recognition), we propose a gaze-guided masked autoencoder (GGMAE). In MAEs, for the selection of masked tokens, a random masking strategy has been often utilized and shown to work well compared to its counterparts in some cases [5 ###reference_b5###, 15 ###reference_b15###, 12 ###reference_b12###]. However, open surgery videos often contained non-informative regions (For instance, in most sample frames from EgoSurgery-Phase illustrated in Fig. 1 ###reference_###, we observe that the intense light from the surgical lamp causes the black clipping to outside the surgical field, making most of the tokens outside surgery field non-informative). Therefore, assuming all tokens have equal information and a uniform probability distribution for masked token selection is suboptimal. With the random masking strategy, masked tokens may be sampled from low-information regions rather than high-information ones, and training to reconstruct these tokens through MAEs is not effective [12 ###reference_b12###, 14 ###reference_b14###]. 
To address this issue, we propose a gaze-guided masking approach.\nGiven that regions, where surgeons\u2019 gaze focuses, are often critical for surgical phase recognition (e.g., the surgical field), our GGMAE leverages gaze information as an empirical semantic richness prior to guiding the masking process, as shown in Fig. 2 ###reference_###. It converts input gaze heatmaps into a probability distribution and employs reparameterization techniques for efficient probability-guided masked token sampling. Consequently, tokens that surgeons focus on are masked with higher probability, enabling enhanced attention to semantically rich spatial regions.\n###figure_10### ###figure_11### ###figure_12### ###figure_13### Our main contributions are summarized as follows: 1) we constructed the first publicity available large-scale real egocentric open surgery dataset, EgoSurgery-Phase, for phase recognition, 2) we propose a gaze-guided masked autoencoder, GGMAE, which incorporates gaze as an empirical semantic richness prior for masking, and 3) experimental results show that our GGMAE yields significant improvement over existing phase recognition and masked autoencoder-based methods, achieving the state-of-the-art performance on EgoSurgery-Phase."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Dataset Design",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Dataset collection",
21
+ "text": "Following the dataset collection protocol proposed in prior research [3 ###reference_b3###], which focused on constructing datasets for surgical tool detection in open surgery videos, we gathered 20 open surgery videos utilizing Tobii cameras attached to the surgeon\u2019s head. The recording of patient videos received ethical approval from the Keio University School of Medicine Ethics Committee, and written informed consent was obtained from all patients or their guardians. Our dataset encompasses 10 distinct types of surgeries, performed by 8 different surgeons.\nThe 20 videos were recorded at a frame rate of 25 fps and a resolution of pixels. Video durations vary between 28 and 234 minutes, reflecting the diversity in type and complexities of surgery. In total, 28 hours of surgical footage were captured. Unlike videos of minimally invasive surgery (MIS), open surgery videos are more likely to contain personally identifiable information (PII) such as the faces of patients, assistant surgeons, and nurses. To address privacy concerns, we subsampled the videos to 0.5 fps and anonymized the patient\u2019s face through blurring. In addition, we exclude frames containing other PII. After these pre-processing steps, the average duration of the videos becomes 46 minutes, resulting in a total duration of 15 hours, thereby yielding a large-scale dataset of high quality. In addition to video, EgoSurgery-Phase provides eye gaze.\n###figure_14###"
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Dataset annotation, statistics and data split",
27
+ "text": "Expert surgeons perform the annotations based on their clinical experience and domain knowledge. The 20 pre-processed videos of open surgery are manually annotated into 9 phases: Disinfection, Design, Anesthesia, Incision, Dissection, Hemostasis, Irrigation, Closure, and Dressing. Samples are shown in Fig. 1 ###reference_###. In total, frames are manually annotated. The sample distribution is shown in Fig.3 ###reference_###. It reveals a notable class imbalance. We use videos for the training set, videos for the validation set, and videos for the test set."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Approach",
33
+ "text": ""
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Overview",
39
+ "text": "Fig. 4 ###reference_### presents an overview of the proposed GGMAE. GGMAE takes as input video and gaze heatmaps . Here, represents the input (RGB) channels, and denotes the spatial resolution of each frame. The space-time cube embedding [15 ###reference_b15###] is used to transform the input video into a set of token embeddings , where is the channel dimension of the tokens, and and are the numbers of tokens along the spatial and temporal dimensions, respectively. , , and represent the size of each token along the temporal, height, and width dimensions, respectively.\nWe apply the proposed Gaze-Guided Masking (GGM) strategy to select tokens for masking with a masking ratio , leveraging the gaze information. The remaining tokens, along with the space-time position embeddings, are fed into the Transformer encoder and decoder [18 ###reference_b18###] to reconstruct the masked maps.\n###figure_15###"
40
+ },
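To make the tokenization above concrete, here is a small bookkeeping sketch. The cube size and clip resolution were stripped from the text, so the values below (2×16×16 cubes on a 10×224×224 clip, 90% masking) are assumptions in the spirit of VideoMAE-style defaults rather than figures confirmed by the paper.

```python
# Hedged sketch: token-grid bookkeeping for the space-time cube embedding.
T, H, W = 10, 224, 224       # frames, height, width of one input clip (assumed)
t, h, w = 2, 16, 16          # temporal / spatial extent of one cube token (assumed)
rho = 0.9                    # masking ratio (assumed)

Nt, Nh, Nw = T // t, H // h, W // w
tokens_per_slice = Nh * Nw                 # spatial tokens per temporal slice
total_tokens = Nt * tokens_per_slice
masked_per_slice = int(rho * tokens_per_slice)

print(f"{Nt} x {Nh} x {Nw} = {total_tokens} tokens; "
      f"{masked_per_slice}/{tokens_per_slice} masked per temporal slice")
```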
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Gaze-guided mask Masking",
45
+ "text": "Open surgery videos often contain non-informative regions, and training a model to reconstruct these tokens using MAE does not improve model performance [12 ###reference_b12###, 14 ###reference_b14###]. Therefore, inspired by representation learning approaches that leverage MAEs with non-uniform masking tailored to token informativeness across diverse domain data inputs [9 ###reference_b9###, 10 ###reference_b10###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###], we integrate gaze information as an empirical semantic richness prior to guide the masking of embedding features. Specifically, we propose non-uniform token sampling based on the accumulated gaze heatmap value of each token.\nFirst, we compute the accumulated gaze heatmap value for each token by summing the heatmap values across the pixels belonging to the token as follows:\nwhere denotes the set of pixels in the gaze heatmap corresponding to the -th token. We then calculate the masking probability vector for each token\u2019s time index using the softmax function as follows:\nwhere represents a vector of accumulated gaze heatmap for each time index , and is a hyper-parameter controlling the sharpness of the softmax function. Finally, the indices of the masked tokens are determined by sampling from a Multinomial distribution with probabilities , for trials without replacement for each time index ."
46
+ },
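A minimal sketch of the gaze-guided masking step as described above: accumulate the gaze heatmap inside each space-time token, turn the accumulated values into a softmax distribution with temperature τ (so tokens the surgeon looks at are masked with higher probability), and draw the masked indices for each temporal slice from a multinomial without replacement. This is my reimplementation from the text, not the authors' code; the cube size and the default ρ and τ are assumptions.

```python
# Hedged sketch of gaze-guided masking (not the official implementation).
import torch

def gaze_guided_mask(heatmaps, t=2, h=16, w=16, rho=0.9, tau=1.0):
    """heatmaps: (T, H, W) gaze heatmaps of one clip.
    Returns a boolean mask of shape (T//t, (H//h)*(W//w)); True = masked token."""
    T, H, W = heatmaps.shape
    # accumulated gaze value per t x h x w cube -> (T//t, H//h, W//w)
    g = heatmaps.reshape(T // t, t, H // h, h, W // w, w).sum(dim=(1, 3, 5))
    g = g.flatten(1)                              # (T//t, N) per-slice token values
    probs = torch.softmax(g / tau, dim=-1)        # masking probabilities per slice
    n_mask = int(rho * probs.shape[1])
    idx = torch.multinomial(probs, n_mask, replacement=False)
    mask = torch.zeros_like(probs, dtype=torch.bool)
    mask.scatter_(1, idx, torch.ones_like(idx, dtype=torch.bool))
    return mask

# toy usage: random "gaze" on a 10-frame, 224x224 clip; ~90% of tokens are masked
mask = gaze_guided_mask(torch.rand(10, 224, 224))
print(mask.shape, mask.float().mean().item())
```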
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "Loss function",
51
+ "text": "The loss function is the mean squared error (MSE) loss between the input pixel values and the reconstructed pixel values:\nwhere is the masked token index, is the set of masked tokens, represents the input ground truth frames, and stands for the reconstructed frames."
52
+ },
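The display equation of this subsection was lost in extraction. A hedged reconstruction of the masked MSE objective it describes, with Ω the set of masked tokens and x_i, x̂_i the ground-truth and reconstructed pixel values of token i:

```latex
% Hedged reconstruction of the masked-reconstruction loss (notation is mine):
\[
  \mathcal{L} \;=\; \frac{1}{\lvert \Omega \rvert} \sum_{i \in \Omega}
      \bigl\lVert \hat{x}_i - x_i \bigr\rVert_2^{2}
\]
```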
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Experiments",
57
+ "text": ""
58
+ },
59
+ {
60
+ "section_id": "4.1",
61
+ "parent_section_id": "4",
62
+ "section_name": "Implementation Details",
63
+ "text": "Network Architecture.\nWe employ the VideoMAE with the ViT-Small [2 ###reference_b2###] backbone. Following VidoeMAE [15 ###reference_b15###], we use the same input patch size of () for all models. We utilize 10-frame clips () as input, maintaining a fixed spatial resolution of () across all experiments. To generate the ground-truth gaze\nheatmaps, we place a Gaussian centered on the ground truth gaze point.\nPre-training details. During pre-training, the masking ratio of the input token is set to . We adopt the AdamW [11 ###reference_b11###] optimizer with a weight decay of and betas of (0.9, 0.95). We pre-train the network for epochs with a batch size of . The learning rate is linearly increased to from 0 in the first warmup epochs and then decreased to by the cosine decay schedule. We set the temperature hyperparameter to . The experiments are conducted using the PyTorch framework on three NVIDIA TITAN RTX GPUs.\nFine-tuning details. After the pre-training, we perform fine-tuning. An MLP head is attached to the pre-trained backbone and the whole network is fully fine-tuned for epochs with cross-entropy loss and a batch size of . The learning rate is linearly increased to from 0 in the first 5 warm-up epochs and then decreased to by the cosine decay schedule. To mitigate class imbalance during fine-tuning, we employ a resampling strategy. All hyperparameters are determined through standard coarse-to-fine grid search or step-by-step tuning."
64
+ },
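The text above says the ground-truth gaze heatmaps are obtained by placing a Gaussian centered on the gaze point, without giving further parameters. The sketch below shows one plausible construction; the standard deviation (sigma=10) and the unit peak are assumptions of mine.

```python
# Hedged sketch of ground-truth gaze heatmap generation (parameters assumed).
import numpy as np

def gaze_heatmap(gx, gy, H=224, W=224, sigma=10.0):
    """2D Gaussian centred at the gaze point (gx, gy), peak value 1."""
    ys, xs = np.mgrid[0:H, 0:W]
    return np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2.0 * sigma ** 2))

hm = gaze_heatmap(120.5, 80.0)
print(hm.shape, float(hm.max()))   # the maximum sits next to the gaze point
```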
65
+ {
66
+ "section_id": "4.2",
67
+ "parent_section_id": "4",
68
+ "section_name": "Evaluation metrics",
69
+ "text": "To quantitatively analyze the performance of our method, we use three widely used benchmark metrics for surgical phase recognition: precision, recall, and Jaccard index. Due to phase class imbalance inherent within the EgoSurgery-Phase dataset, the performance will be reported in macro-average. Macro-average is used in imbalanced multi-class settings as it provides equal emphasis on minority classes."
70
+ },
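For completeness, a small sketch of the macro-averaged precision, recall, and Jaccard index described above, using the standard per-class definitions with equal weight given to every phase. This is my own helper, not the authors' evaluation script.

```python
# Hedged sketch of macro-averaged phase-recognition metrics.
import numpy as np

def macro_scores(y_true, y_pred, n_classes=9):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    precision, recall, jaccard = [], [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision.append(tp / (tp + fp) if tp + fp else 0.0)
        recall.append(tp / (tp + fn) if tp + fn else 0.0)
        jaccard.append(tp / (tp + fp + fn) if tp + fp + fn else 0.0)
    return tuple(np.mean(v) for v in (precision, recall, jaccard))

p, r, j = macro_scores([0, 1, 2, 2, 3], [0, 1, 2, 1, 3], n_classes=4)
print(f"precision {p:.2f}, recall {r:.2f}, Jaccard {j:.2f}")
```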
71
+ {
72
+ "section_id": "4.3",
73
+ "parent_section_id": "4",
74
+ "section_name": "Phase recognition performance comparison",
75
+ "text": "Comparison with phase recognition methods:\nWe first compare our approach with current state-of-the-art phase recognition methods, including TeCNO [1 ###reference_b1###], Trans-SVNet [4 ###reference_b4###], and NETE [21 ###reference_b21###], alongside common baselines PhaseLSTM [16 ###reference_b16###] and PhaseNet [17 ###reference_b17###]. The performance of all methods is summarized in Table 1 ###reference_###. Our GGMAE notably surpasses the baselines in all metrics. Specifically, our method exhibits a substantial improvement over NETE, which is the best performance among previous state-of-the-art methods, by (from to ) in the Precision, (from to ) in the Recall, and (from to ) in the Jaccard index.\nComparison with masked autoencoder-based methods. After being pre-trained with the proposed GGMAE framework, the model exhibits significant performance improvements compared to the model trained from scratch ( improvement in the Jaccard index). We then compare current state-of-the-art MAE-based methods, namely VideoMAE and VideoMAEV2. Additionally, we evaluate our approach against SurgMAE, which first demonstrates the effectiveness of MAEs in the surgical domain. The performance of all methods is summarized in Table 2 ###reference_###. Employing the same backbone and training schema, GGMAE surpasses VideoMAE by and VideoMAEV2 by and SurgMAE by in terms of Jaccard index."
76
+ },
77
+ {
78
+ "section_id": "4.4",
79
+ "parent_section_id": "4",
80
+ "section_name": "Ablation study",
81
+ "text": "Mask sampling strategy. To verify the effectiveness of the proposed gaze-guided masking strategy, we compare its performance with that of random and tube masking. As we can see, our gaze-guided masking strategy brings absolute performance improvements of . This suggests that the gaze information, as an empirical semantic richness prior, can effectively guide the masking process.\nMasking Ratio. As shown in Tab 3 ###reference_### (b), we experimented with different masking ratios. Results show that either too large or too small masking ratios have a negative impact on performance. We empirically found that a masking ratio of exhibits the best results.\nTemerature parameter.\nWe experimented with different temperature parameters . As the temperature parameter decreases, the region toward which the gaze is directed becomes more likely to be masked. As shown in Tab 3 ###reference_### (c), Our GGMAE exhibits the best performance when temperature parameters is . Overall, a temperature parameter is set to by default.\n###table_1###"
82
+ },
83
+ {
84
+ "section_id": "5",
85
+ "parent_section_id": null,
86
+ "section_name": "Conclusion and Future Work",
87
+ "text": "In this paper, we construct the first egocentric open surgery video dataset, Egosurgery-Phase, for phase recognition. We also propose a gaze-guided masked autoencoder, GGMAE, to promote better attention to semantically rich spatial regions using gaze information. Furthermore, GGMAE achieves substantial improvements compared to the existing phase recognition methods and masked autoencoder methods. The remaining challenges for this dataset involve improving model performance on the Egosurgery-Phase. By releasing this dataset to the public, we, alongside the wider research community, aspire to address these challenges in the future collaboratively. Moreover, we intend to enrich this dataset by augmenting the video content and incorporating footage captured from various perspectives (e.g., assistant surgeons, anesthesiologists, perfusionists, and nurses) to advance the automated analysis of open surgery videos."
88
+ }
89
+ ],
90
+ "appendix": [],
91
+ "tables": {
92
+ "1": {
93
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Performance comparison with baseline and state-of-the-art phase recognition models on EgoSurgery-Phase.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.1\" style=\"width:325.2pt;height:106.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-29.6pt,9.7pt) scale(0.845818809765425,0.845818809765425) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.1\">Methods</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.2\">Backbone</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.3\">Precision</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.4\">Recall</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.5\">Jaccard</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.1\">PhaseLSTM\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.19644v3#bib.bib16\" title=\"\">16</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.2\">AlexNet</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.3\">36.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.4\">33.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.5\">21.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.3.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.1\">PhaseNet\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.19644v3#bib.bib17\" title=\"\">17</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.2\">AlexNet</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.3\">37.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.4\">25.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.5\">19.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.4.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.1\">TeCNO\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.19644v3#bib.bib1\" title=\"\">1</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.2\">ResNet-50</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.3\">47.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.4\">39.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.5\">27.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.5.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.1\">Trans-SVNet\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.19644v3#bib.bib4\" title=\"\">4</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.2\">ResNet-50</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.3\">41.8</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T1.1.1.5.4.4\">35.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.5\">23.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.6.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.1\">NETE\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.19644v3#bib.bib21\" title=\"\">21</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.2\">Inception v3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.3\">43.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.4\">35.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.5\">27.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.7.6\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.1.1.7.6.1\"><span class=\"ltx_text\" id=\"S4.T1.1.1.7.6.1.1\" style=\"background-color:#E6E6E6;\">GGMAE (Ours)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.1.1.7.6.2\"><span class=\"ltx_text\" id=\"S4.T1.1.1.7.6.2.1\" style=\"background-color:#E6E6E6;\">ViT-S</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.1.1.7.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.7.6.3.1\" style=\"background-color:#E6E6E6;\">51.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.1.1.7.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.7.6.4.1\" style=\"background-color:#E6E6E6;\">45.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.1.1.7.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.7.6.5.1\" style=\"background-color:#E6E6E6;\">33.9</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
94
+ "capture": "Table 1: Performance comparison with baseline and state-of-the-art phase recognition models on EgoSurgery-Phase."
95
+ },
96
+ "2": {
97
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Performance comparison with state-of-the-art masked autoencoder-based models on Egosurgery-Phase. The supervised baseline is ViT-S trained from scratch on Egosurgery-Phase.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.1\" style=\"width:433.6pt;height:95.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-29.2pt,6.4pt) scale(0.881146825536357,0.881146825536357) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.1\">Methods</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.2\">Backbone</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.3\">Masking</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.4\">Precision</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.5\">Recall</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.6\">Jaccard</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.2.1.1\">Supervised</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.2.1.2\">ViT-S</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T2.1.1.2.1.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.2.1.4\">47.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.2.1.5\">31.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.2.1.6\">27.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.3.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.3.2.1\">VideoMAE\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.19644v3#bib.bib15\" title=\"\">15</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.3.2.2\">ViT-S</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.3.2.3\">Tube masking</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.3.2.4\">49.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.3.2.5\">41.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.3.2.6\">29.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.4.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.3.1\">VideoMAE V2\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.19644v3#bib.bib19\" title=\"\">19</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.3.2\">ViT-S</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.3.3\">Dual masking</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.4.3.4.1\">54.2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.3.5\">43.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.3.6\">30.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.5.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.4.1\">SurgMAE\u00a0<cite class=\"ltx_cite 
ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.19644v3#bib.bib6\" title=\"\">6</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.4.2\">ViT-S</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.4.3\">Spatio-temporal masking</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.4.4\">52.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.4.5\">41.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.4.6\">27.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.6.5\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.1.6.5.1\"><span class=\"ltx_text\" id=\"S4.T2.1.1.6.5.1.1\" style=\"background-color:#E6E6E6;\">GGMAE (Ours)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.1.6.5.2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.6.5.2.1\" style=\"background-color:#E6E6E6;\">ViT-S</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.1.6.5.3\"><span class=\"ltx_text\" id=\"S4.T2.1.1.6.5.3.1\" style=\"background-color:#E6E6E6;\">Gaze-guided masking</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.1.6.5.4\"><span class=\"ltx_text\" id=\"S4.T2.1.1.6.5.4.1\" style=\"background-color:#E6E6E6;\">51.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.1.6.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.6.5.5.1\" style=\"background-color:#E6E6E6;\">45.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.1.6.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.6.5.6.1\" style=\"background-color:#E6E6E6;\">33.9</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
98
+ "capture": "Table 2: Performance comparison with state-of-the-art masked autoencoder-based models on Egosurgery-Phase. The supervised baseline is ViT-S trained from scratch on Egosurgery-Phase."
99
+ },
100
+ "3": {
101
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Ablation studies on Egosurgery-Phase. We use ViT-S as a backbone for all the experiments.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T3.3.3\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.3.3.3.3.3\">\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2.2.2.2.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.2.2.2.2.2.3\"><span class=\"ltx_text\" id=\"S4.T3.2.2.2.2.2.2.3.1\" style=\"font-size:80%;\">(a) Mask sampling strategy.</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.1.1.1.1.1\"><span class=\"ltx_text\" id=\"S4.T3.1.1.1.1.1.1.1.1\" style=\"font-size:80%;\">(b) Masking ratio ()</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.2.2.2.2.2.2\"><span class=\"ltx_text\" id=\"S4.T3.2.2.2.2.2.2.2.1\" style=\"font-size:80%;\">(c) Temperature parameter ().</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.2\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.3.3.3.3.3.3.2.1\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3.2.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.3.3.3.3.3.3.2.1.1.1\">Strategy</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.3.3.3.3.3.3.2.1.1.2\">Ratio</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.3.3.3.3.3.3.2.1.1.3\">Jaccard</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3.2.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.3.3.3.3.3.2.1.2.1\">Random\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.19644v3#bib.bib5\" title=\"\">5</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.3.3.3.3.3.2.1.2.2\">0.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.3.3.3.3.3.2.1.2.3\">28.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3.2.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.2.1.3.1\">Random\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.19644v3#bib.bib5\" title=\"\">5</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.2.1.3.2\">0.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.2.1.3.3\">30.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3.2.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.2.1.4.1\">Tube\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.19644v3#bib.bib15\" title=\"\">15</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.2.1.4.2\">0.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.2.1.4.3\">29.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3.2.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.3.3.3.3.3.3.2.1.5.1\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.3.3.3.2.1.5.1.1\" style=\"background-color:#E6E6E6;\">Gaze-guided</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.3.3.3.3.3.3.2.1.5.2\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.3.3.3.2.1.5.2.1\" 
style=\"background-color:#E6E6E6;\">0.90</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.3.3.3.3.3.3.2.1.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.3.3.3.3.3.2.1.5.3.1\" style=\"background-color:#E6E6E6;\">33.9</span></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.3\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.3.3.3.3.3.3.3.1\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3.3.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.3.3.3.3.3.3.3.1.1.1\">Ratio</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.3.3.3.3.3.3.3.1.1.2\">Jaccard</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3.3.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.3.3.3.3.3.3.1.2.1\">0.95</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.3.3.3.3.3.3.1.2.2\">31.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3.3.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.3.1.3.1\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.3.3.3.3.1.3.1.1\" style=\"background-color:#E6E6E6;\">0.90</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.3.1.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.3.3.3.3.3.3.1.3.2.1\" style=\"background-color:#E6E6E6;\">33.9</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3.3.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.3.1.4.1\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.3.3.3.3.1.4.1.1\" style=\"background-color:#FFFFFF;\">0.85</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.3.1.4.2\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.3.3.3.3.1.4.2.1\" style=\"background-color:#FFFFFF;\">31.6</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3.3.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.3.3.3.3.3.3.3.1.5.1\">0.80</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.3.3.3.3.3.3.3.1.5.2\">31.5</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.3.3.3.3.3.3.1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.3.3.3.3.3.3.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.3.3.3.3.3.3.1.1.1.2\">Jaccard</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3.1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.3.3.3.3.3.1.1.2.1\">1.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.3.3.3.3.3.1.1.2.2\">30.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3.1.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.1.1.3.1\">0.75</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.1.1.3.2\">30.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3.1.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.1.1.4.1\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.3.3.3.1.1.4.1.1\" style=\"background-color:#E6E6E6;\">0.50</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3.3.3.1.1.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.3.3.3.3.3.1.1.4.2.1\" style=\"background-color:#E6E6E6;\">33.9</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3.3.3.3.1.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.3.3.3.3.3.3.1.1.5.1\"><span 
class=\"ltx_text\" id=\"S4.T3.3.3.3.3.3.3.1.1.5.1.1\" style=\"background-color:#FFFFFF;\">0.25</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.3.3.3.3.3.3.1.1.5.2\"><span class=\"ltx_text\" id=\"S4.T3.3.3.3.3.3.3.1.1.5.2.1\" style=\"background-color:#FFFFFF;\">27.2</span></td>\n</tr>\n</table>\n</td>\n</tr>\n</table>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
102
+ "capture": "Table 3: Ablation studies on Egosurgery-Phase. We use ViT-S as a backbone for all the experiments."
103
+ }
104
+ },
105
+ "image_paths": {
106
+ "1(a)": {
107
+ "figure_path": "2405.19644v3_figure_1(a).png",
108
+ "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.",
109
+ "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/disinfection.jpg"
110
+ },
111
+ "1(b)": {
112
+ "figure_path": "2405.19644v3_figure_1(b).png",
113
+ "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.",
114
+ "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/design.jpg"
115
+ },
116
+ "1(c)": {
117
+ "figure_path": "2405.19644v3_figure_1(c).png",
118
+ "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.",
119
+ "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/anesthesia.jpg"
120
+ },
121
+ "1(d)": {
122
+ "figure_path": "2405.19644v3_figure_1(d).png",
123
+ "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.",
124
+ "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/incision.jpg"
125
+ },
126
+ "1(e)": {
127
+ "figure_path": "2405.19644v3_figure_1(e).png",
128
+ "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.",
129
+ "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/disssection.jpg"
130
+ },
131
+ "1(f)": {
132
+ "figure_path": "2405.19644v3_figure_1(f).png",
133
+ "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.",
134
+ "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/hemostasis.jpg"
135
+ },
136
+ "1(g)": {
137
+ "figure_path": "2405.19644v3_figure_1(g).png",
138
+ "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.",
139
+ "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/irrigation.jpg"
140
+ },
141
+ "1(h)": {
142
+ "figure_path": "2405.19644v3_figure_1(h).png",
143
+ "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.",
144
+ "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/closure.jpg"
145
+ },
146
+ "1(i)": {
147
+ "figure_path": "2405.19644v3_figure_1(i).png",
148
+ "caption": "Figure 1: Illustration of 9 surgical phases (P1-P9) annotated in the EgoSurgery-Phase dataset. Typically, the phases are executed sequentially from P1 to P9.",
149
+ "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/phase_examples/dressing.jpg"
150
+ },
151
+ "2(a)": {
152
+ "figure_path": "2405.19644v3_figure_2(a).png",
153
+ "caption": "Figure 2: Example of RGB image and gaze heatmap from EgoSurgery-Phase, along with their corresponding random mask and gaze-guided mask. The gaze heatmap is depicted as a heatmap overlaid onto the RGB image for visualization purposes.",
154
+ "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/mask_examples/rgb.jpg"
155
+ },
156
+ "2(b)": {
157
+ "figure_path": "2405.19644v3_figure_2(b).png",
158
+ "caption": "Figure 2: Example of RGB image and gaze heatmap from EgoSurgery-Phase, along with their corresponding random mask and gaze-guided mask. The gaze heatmap is depicted as a heatmap overlaid onto the RGB image for visualization purposes.",
159
+ "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/mask_examples/13_1_0258.jpg"
160
+ },
161
+ "2(c)": {
162
+ "figure_path": "2405.19644v3_figure_2(c).png",
163
+ "caption": "Figure 2: Example of RGB image and gaze heatmap from EgoSurgery-Phase, along with their corresponding random mask and gaze-guided mask. The gaze heatmap is depicted as a heatmap overlaid onto the RGB image for visualization purposes.",
164
+ "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/mask_examples/random_mask.jpg"
165
+ },
166
+ "2(d)": {
167
+ "figure_path": "2405.19644v3_figure_2(d).png",
168
+ "caption": "Figure 2: Example of RGB image and gaze heatmap from EgoSurgery-Phase, along with their corresponding random mask and gaze-guided mask. The gaze heatmap is depicted as a heatmap overlaid onto the RGB image for visualization purposes.",
169
+ "url": "http://arxiv.org/html/2405.19644v3/extracted/6028010/figs/mask_examples/gaze_guided_mask.jpg"
170
+ },
171
+ "3": {
172
+ "figure_path": "2405.19644v3_figure_3.png",
173
+ "caption": "Figure 3: The phase distribution of frames.",
174
+ "url": "http://arxiv.org/html/2405.19644v3/x1.png"
175
+ },
176
+ "4": {
177
+ "figure_path": "2405.19644v3_figure_4.png",
178
+ "caption": "Figure 4: Overview of the proposed GGMAE: GGME performs the task of masking tokens and reconstructing these masked tokens with Transformer encoder-decoder architecture. Considering that open surgery videos often contain non-informative regions, we introduce the Gaze-Guided Masking (GGM) module, which selects tokens to be masked based on gaze information.",
179
+ "url": "http://arxiv.org/html/2405.19644v3/x2.png"
180
+ }
181
+ },
182
+ "validation": true,
183
+ "references": [],
184
+ "url": "http://arxiv.org/html/2405.19644v3"
185
+ }
20241127/2406.03095v4.json ADDED
@@ -0,0 +1,316 @@
1
+ {
2
+ "title": "EgoSurgery-Tool: A Dataset of Surgical Tool and Hand Detection from Egocentric Open Surgery Videos",
3
+ "abstract": "Surgical tool detection is a fundamental task for understanding egocentric open surgery videos. However, detecting surgical tools presents significant challenges due to their highly imbalanced class distribution, similar shapes and similar textures, and heavy occlusion. The lack of a comprehensive large-scale dataset compounds these challenges. In this paper, we introduce EgoSurgery-Tool, an extension of the existing EgoSurgery-Phase dataset, which contains real open surgery videos captured using an egocentric camera attached to the surgeon\u2019s head, along with phase annotations. EgoSurgery-Tool has been densely annotated with surgical tools and comprises over 49K surgical tool bounding boxes across 15 categories, constituting a large-scale surgical tool detection dataset. EgoSurgery-Tool also provides annotations for hand detection with over 46K hand-bounding boxes, capturing hand-object interactions that are crucial for understanding activities in egocentric open surgery. EgoSurgery-Tool is superior to existing datasets due to its larger scale, greater variety of surgical tools, more annotations, and denser scenes. We conduct a comprehensive analysis of EgoSurgery-Tool using nine popular object detectors to assess their effectiveness in both surgical tool and hand detection. The dataset will be released at project page.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Detecting surgical tools from an egocentric perspective in the operating room is fundamental task for the development of intelligent systems that can assist surgeons in real-time. For example, recognizing a tool can help prevent accidents, such as leaving gauze inside the body, by notifying surgeons. Recently, various approaches have been proposed for surgical tool detection, particularly in minimally invasive surgeries (MIS)[15 ###reference_b15###, 19 ###reference_b19###, 10 ###reference_b10###, 17 ###reference_b17###, 1 ###reference_b1###, 26 ###reference_b26###, 8 ###reference_b8###]. However, there have been few attempts to detect surgical tools in open surgery videos due to the limited availability of large-scale datasets. The existing surgical tool detection datasets for open surgery are either small[6 ###reference_b6###] or not publicly available [7 ###reference_b7###]. In contrast, several datasets [10 ###reference_b10###, 17 ###reference_b17###, 13 ###reference_b13###] have been released for MIS, driving advancements in learning-based algorithms. The absence of comparable large-scale datasets for open surgical tool detection has significantly impeded progress in achieving accurate tool detection within the open surgery domain. Challenges include dealing with surgical tools that exhibit a highly imbalanced, long-tailed distribution, have similar textures and shapes, and appear in occluded scenes, posing new challenges for many existing approaches.\nHand detection is an essential task for egocentric video analysis, where hand-object interaction (HOI) is crucial for action localization and understanding in activities of daily living. Several large-scale hand detection datasets have been proposed [2 ###reference_b2###, 3 ###reference_b3###, 16 ###reference_b16###] for detecting hands in daily activities. Localizing hands is also vital for analyzing egocentric open surgery videos. However, there is little work on hand detection in the open surgery domain [6 ###reference_b6###, 21 ###reference_b21###], and only one small publicly available dataset exists [6 ###reference_b6###]. Training on existing hand datasets from daily activities does not transfer well to surgical hand detection due to significant differences in domain appearance, highlighting the need for a large-scale dataset.\nWith these motivations, we introduce EgoSurgery-Tool, a large-scale dataset captured from a camera attached to the surgeon\u2019s head, containing dense annotations for surgical tools and the surgeon\u2019s hand-bounding boxes. EgoSurgery-Tool is an extension of the recently introduced EgoSurgery-Phase [9 ###reference_b9###]. We now elaborate on the unique characteristics and differences between the existing dataset [6 ###reference_b6###] and our proposed EgoSurgery-Tool dataset. Compared to the existing dataset [6 ###reference_b6###], EgoSurgery-Tool offers several advantages: 1) it is the largest-scale dataset among tool and hand detection datasets in the open surgery domain in terms of the number of images and annotations; 2) it contains a greater variety of surgical tools; 3) it includes high-density scenes with numerous surgical tools; and 4) each hand annotation specifies hand identification (the camera wearer\u2019s left or right hand or another person\u2019s left or right hand). Our dataset is compared with existing related datasets in Table 1 ###reference_###, and example images are shown in Figure 1 ###reference_###. 
Based on the proposed EgoSurgery-Tool dataset, we provide a systematic study on nine mainstream baselines."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "EgoSurgery-Tool Dataset",
15
+ "text": "The EgoSurgery-Phase dataset [9 ###reference_b9###] consists of 21 videos covering 10 distinct surgical procedures, with a total duration of 15 hours, performed by 8 surgeons. EgoSurgery-Phase provides over 27K frames with phase annotations. However, EgoSurgery-Phase lacks sufficient information on surgical tools and hands. Therefore, we propose EgoSurgery-Tool, which includes additional annotations for surgical tools and hands on a subset of the existing EgoSurgery-Phase dataset. These annotations make EgoSurgery-Phase the only available dataset for multi-task learning of phase recognition, surgical tool detection, and hand detection. EgoSurgery-Phase is manually annotated by a group of annotators who were instructed for each task to ensure consistency across the dataset. The annotations were then inspected by expert surgeons to assess their quality. The rest of this section provides details on the annotations, benchmarking, and statistics of EgoSurgery-Tool.\n###figure_1### ###figure_2### ###table_1###"
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Data splits and statistic",
21
+ "text": "We annotated 15 types of surgical tools and 4 types of hands in 15 videos from the EgoSurgery-Phase dataset. The proposed EgoSurgery-Tool dataset contains 15,437 high-quality images, annotated with 49,652 surgical tools and 46,320 hands. The distribution of surgical tools, shown in Figure 2 ###reference_###, reveals a notable class imbalance. Figure 3 ###reference_### shows The distribution of hand. Table 2 ###reference_### shows the number of images within each instance count range (0-5, 6-10, 11-15). Our EgoSurgery-Phase dataset demonstrates higher density compared to the surgical tool detection dataset in MIS. The co-occurrence matrix between surgical tools and surgical phases is presented in Figure 4 ###reference_###. Along the Y-axis are the given surgical tools, and the X-axis enumerates conditional phases. Each element represents the conditional probability that a phase occurs when a surgical tool is used. For example, when a scalpel appears in a frame, that frame belongs to the incision phase with a probability of 0.98. Surgical tool information might be helpful for surgical phase recognition. EgoSurgery-Tool is divided into training, validation, and test sets at the video level, ensuring that all frames of a video sequence appear in one specific split. The 15 video sequences are split into 10 training, 2 validation, and 3 test videos for consistency with the standard evaluation of other relevant datasets, resulting in 9,657 training, 1,515 validation, and 4,265 test images. The number of instances per category in each set is shown in Table 3 ###reference_###.\n###figure_3###"
22
+ },
23
+ {
24
+ "section_id": "3",
25
+ "parent_section_id": null,
26
+ "section_name": "Experiments",
27
+ "text": ""
28
+ },
29
+ {
30
+ "section_id": "3.1",
31
+ "parent_section_id": "3",
32
+ "section_name": "Experimental setups",
33
+ "text": "We compare nine popular object detectors: Faster R-CNN\n(2015) [14 ###reference_b14###], RetinaNet (2017) [11 ###reference_b11###], Cascade R-CNN (2018) [4 ###reference_b4###], CenterNet (2019) [24 ###reference_b24###], Sparse R-CNN (2021) [18 ###reference_b18###], VarifocalNet (2021) [23 ###reference_b23###], Deformable-DETER (2021) [25 ###reference_b25###], DDQ (2023) [22 ###reference_b22###], and DINO (2023) [20 ###reference_b20###]. We use the MMDetection [5 ###reference_b5###] for the implementation. We fine-tune models with pre-trained on MS-COCO [12 ###reference_b12###]. For\na fair comparison, we select the algorithm\u2019s backbones to\nhave a similar number of parameters. We use the COCO evaluation procedure and report , , and [12 ###reference_b12###]. Because each detector is calibrated differently, setting a comparable detection confidence threshold is impractical. Therefore, we evaluate all the detectors by using confidence .\n###figure_4### ###table_2### ###figure_5###"
34
+ },
35
+ {
36
+ "section_id": "3.2",
37
+ "parent_section_id": "3",
38
+ "section_name": "Quantitative results",
39
+ "text": "We present the results of nine mainstream object detection algorithms in Table 4 ###reference_###. For surgical tool detection, among all methods, the recent VarifocalNet achieves the highest performance in terms of the metric for surgical tool detection tasks. VarifocalNet also consistently outperforms other detectors in terms of and , indicating its superior ability to estimate the correct bounding box sizes. The superiority of VarifocalNet is attributed to its dense object detection capability, enabling it to detect objects at small scales and under heavy occlusion. For hand detection, VarifocalNet outperforms other object detection methods in terms of and . In terms of , DINO achieves the best performance.\nThe confusion matrix for the standard object detection method, Faster R-CNN, is shown in Figure 6 ###reference_###. We observe that tools with similar textures and shapes are often misclassified (e.g., scissors and needle holders). Additionally, tools with many varieties of appearances are confused with backgrounds (e.g., forceps, gauze, and retractors).\nWe compare the hand detection performance of different training data and pre-training data settings using Faster R-CNN in Table5 ###reference_###. Training with our EgoSurgery-Tool dataset significantly outperforms training with the existing hand dataset, EgoHands, which was collected in a daily living setting. Despite the vast quantity of annotated data in EgoHands, models trained solely on EgoHand perform substantially worse compared to those trained with our EgoSurgery-Tool, suggesting a significant domain transfer problem related to the characteristics and representation of hands in a surgical environment. We also explored the performance of hand detection with different pre-training data. Pre-training with COCO achieves the best performance. Due to the significant domain gap, pre-training with the existing hand detection dataset, EgoHands, degrades performance."
40
+ },
41
+ {
42
+ "section_id": "3.3",
43
+ "parent_section_id": "3",
44
+ "section_name": "Qualitative results",
45
+ "text": "Figure 5 ###reference_### presents qualitative results for Faster-RCNN using IoU thresholds of . The model successfully detects surgical tools in (a, b) and hands wearing different colors of surgeons\u2019 gloves in (c, d) across a variety of surgery types. Examples of detection failures are shown in (e)-(h). Heavy occlusion (e, h), poor lighting conditions (f), and similar shapes and textures between categories (e, g) cause these incorrect detections."
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "Conclusion",
51
+ "text": "To address the lack of a large-scale dataset in the open surgery domain, we introduce EgoSurgery-Tool, an egocentric open surgery video dataset captured from a camera attached to the surgeon\u2019s head, including bounding box annotations for surgical tools and hands. We conducted extensive evaluations of recent object detection methods on this new benchmark dataset. We believe the dense annotations of EgoSurgery-Tool will foster future research in video understanding within the open surgery domain."
52
+ }
53
+ ],
54
+ "appendix": [],
55
+ "tables": {
56
+ "1": {
57
+ "table_html": "<figure class=\"ltx_table\" id=\"S0.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S0.T1.3.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S0.T1.4.2\" style=\"font-size:90%;\">Comparisons of EgoSurgery-Tool and existing datasets for surgical tool detection. <span class=\"ltx_text ltx_font_italic\" id=\"S0.T1.4.2.1\">OS</span> indicates open surgery.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S0.T1.5\" style=\"width:496.9pt;height:59.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-124.9pt,15.1pt) scale(0.665360159047416,0.665360159047416) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S0.T1.5.1\">\n<tr class=\"ltx_tr\" id=\"S0.T1.5.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S0.T1.5.1.1.1\">Dataset</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S0.T1.5.1.1.2\">Surgery type</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S0.T1.5.1.1.3\">Frames</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S0.T1.5.1.1.4\">Tool instances</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S0.T1.5.1.1.5\">Hand instances</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S0.T1.5.1.1.6\">Tool categories</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S0.T1.5.1.1.7\">Hand categories</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S0.T1.5.1.1.8\">Tool Instances per frame</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.5.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S0.T1.5.1.2.1\">m2cai16-tool-locations\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib10\" title=\"\">10</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S0.T1.5.1.2.2\">MIS</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S0.T1.5.1.2.3\">2.8K</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S0.T1.5.1.2.4\">3.9K</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S0.T1.5.1.2.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S0.T1.5.1.2.6\">7</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S0.T1.5.1.2.7\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S0.T1.5.1.2.8\">1.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.5.1.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S0.T1.5.1.3.1\">Cholec80-locations\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib17\" title=\"\">17</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S0.T1.5.1.3.2\">MIS</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S0.T1.5.1.3.3\">4.0K</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S0.T1.5.1.3.4\">6.5K</td>\n<td class=\"ltx_td\" id=\"S0.T1.5.1.3.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S0.T1.5.1.3.6\">7</td>\n<td class=\"ltx_td\" id=\"S0.T1.5.1.3.7\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S0.T1.5.1.3.8\">1.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.5.1.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S0.T1.5.1.4.1\">AVOS dataset\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib6\" title=\"\">6</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S0.T1.5.1.4.2\">OS</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S0.T1.5.1.4.3\">3.3K</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S0.T1.5.1.4.4\">2.8K</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S0.T1.5.1.4.5\">6.2K</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S0.T1.5.1.4.6\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S0.T1.5.1.4.7\">1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S0.T1.5.1.4.8\">0.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.5.1.5\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S0.T1.5.1.5.1\"><span class=\"ltx_text\" id=\"S0.T1.5.1.5.1.1\" style=\"background-color:#E6E6E6;\">EgoSurgery-Tool (Ours)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S0.T1.5.1.5.2\"><span class=\"ltx_text\" id=\"S0.T1.5.1.5.2.1\" style=\"background-color:#E6E6E6;\">OS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S0.T1.5.1.5.3\"><span class=\"ltx_text\" id=\"S0.T1.5.1.5.3.1\" style=\"background-color:#E6E6E6;\">15.4K</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S0.T1.5.1.5.4\"><span class=\"ltx_text\" id=\"S0.T1.5.1.5.4.1\" style=\"background-color:#E6E6E6;\">49.7K</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S0.T1.5.1.5.5\"><span class=\"ltx_text\" id=\"S0.T1.5.1.5.5.1\" style=\"background-color:#E6E6E6;\">46.3K</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S0.T1.5.1.5.6\"><span class=\"ltx_text\" id=\"S0.T1.5.1.5.6.1\" style=\"background-color:#E6E6E6;\">15</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S0.T1.5.1.5.7\"><span class=\"ltx_text\" id=\"S0.T1.5.1.5.7.1\" style=\"background-color:#E6E6E6;\">4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S0.T1.5.1.5.8\"><span class=\"ltx_text\" id=\"S0.T1.5.1.5.8.1\" style=\"background-color:#E6E6E6;\">3.2</span></td>\n</tr>\n</table>\n</span></div>\n</figure>",
58
+ "capture": "Table 1: Comparisons of EgoSurgery-Tool and existing datasets for surgical tool detection. OS indicates open surgery."
59
+ },
60
+ "2": {
61
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S2.T2.2.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S2.T2.3.2\" style=\"font-size:90%;\">Comparison of datasets with respect to image distribution across various instance count ranges. We compute the number of images for each dataset within three count ranges.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S2.T2.4\" style=\"width:216.8pt;height:34.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-137.2pt,21.6pt) scale(0.441395797413331,0.441395797413331) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T2.4.1\">\n<tr class=\"ltx_tr\" id=\"S2.T2.4.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T2.4.1.1.1\"><span class=\"ltx_text\" id=\"S2.T2.4.1.1.1.1\">Datasets</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T2.4.1.1.2\"><span class=\"ltx_text\" id=\"S2.T2.4.1.1.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T2.4.1.1.2.1.1\">\n<span class=\"ltx_tr\" id=\"S2.T2.4.1.1.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T2.4.1.1.2.1.1.1.1\"># Image</span></span>\n<span class=\"ltx_tr\" id=\"S2.T2.4.1.1.2.1.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T2.4.1.1.2.1.1.2.1\">(0-5 instances)</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T2.4.1.1.3\"><span class=\"ltx_text\" id=\"S2.T2.4.1.1.3.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T2.4.1.1.3.1.1\">\n<span class=\"ltx_tr\" id=\"S2.T2.4.1.1.3.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T2.4.1.1.3.1.1.1.1\"># Image</span></span>\n<span class=\"ltx_tr\" id=\"S2.T2.4.1.1.3.1.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T2.4.1.1.3.1.1.2.1\">(6-10 instances)</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T2.4.1.1.4\"><span class=\"ltx_text\" id=\"S2.T2.4.1.1.4.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T2.4.1.1.4.1.1\">\n<span class=\"ltx_tr\" id=\"S2.T2.4.1.1.4.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T2.4.1.1.4.1.1.1.1\"># Image</span></span>\n<span class=\"ltx_tr\" id=\"S2.T2.4.1.1.4.1.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T2.4.1.1.4.1.1.2.1\">(11-15 instances)</span></span>\n</span></span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.4.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.4.1.2.1\">m2cai16-tool-locations\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib10\" title=\"\">10</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.4.1.2.2\">2,811</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.4.1.2.3\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T2.4.1.2.4\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.4.1.3\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T2.4.1.3.1\"><span class=\"ltx_text\" id=\"S2.T2.4.1.3.1.1\" style=\"background-color:#E6E6E6;\">EgoSurgery-Tool</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T2.4.1.3.2\"><span class=\"ltx_text\" id=\"S2.T2.4.1.3.2.1\" style=\"background-color:#E6E6E6;\">6,128</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T2.4.1.3.3\"><span class=\"ltx_text\" id=\"S2.T2.4.1.3.3.1\" style=\"background-color:#E6E6E6;\">8,803</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T2.4.1.3.4\"><span class=\"ltx_text\" id=\"S2.T2.4.1.3.4.1\" style=\"background-color:#E6E6E6;\">506</span></td>\n</tr>\n</table>\n</span></div>\n</figure>",
62
+ "capture": "Table 2: Comparison of datasets with respect to image distribution across various instance count ranges. We compute the number of images for each dataset within three count ranges."
63
+ },
64
+ "3": {
65
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S2.T3.2.1.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S2.T3.3.2\" style=\"font-size:90%;\">The number of instances per category in each set and the category distribution in the EgoSurgery-Tool dataset.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S2.T3.4\">\n<tr class=\"ltx_tr\" id=\"S2.T3.4.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.1.1\">(a) The number of instances per surgical tool category.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S2.T3.4.2.1.1\" style=\"width:216.8pt;height:232.7pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-34.1pt,36.6pt) scale(0.760587019019252,0.760587019019252) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T3.4.2.1.1.1\">\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T3.4.2.1.1.1.1.1\">Class</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T3.4.2.1.1.1.1.2\">Train</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T3.4.2.1.1.1.1.3\">Val</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T3.4.2.1.1.1.1.4\">Test</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T3.4.2.1.1.1.1.5\">Total</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T3.4.2.1.1.1.1.6\">Dist.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T3.4.2.1.1.1.2.1\">Bipolar Forceps</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T3.4.2.1.1.1.2.2\">446</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T3.4.2.1.1.1.2.3\">55</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T3.4.2.1.1.1.2.4\">195</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T3.4.2.1.1.1.2.5\">696</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T3.4.2.1.1.1.2.6\">1.40%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.3.1\">Electric Cautery</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.3.2\">1,404</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.3.3\">101</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.3.4\">162</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.3.5\">1,667</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.3.6\">3.36%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.4.1\">Forceps</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.4.2\">2,534</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.4.3\">154</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.4.4\">3,375</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.4.5\">6,063</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.4.6\">1.22%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.5.1\">Gauze</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.5.2\">4,596</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.5.3\">455</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.5.4\">1644</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.5.5\">6,695</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.5.6\">13.58%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.6.1\">Hook</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.6.2\">1,045</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.6.3\">147</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.6.4\">157</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.6.5\">1,349</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.6.6\">2.72%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.7.1\">Mouth Gag</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.7.2\">3,807</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.7.3\">990</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.7.4\">1,188</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.7.5\">5,985</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.7.6\">12.05%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.8.1\">Needle Holders</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.8.2\">3,031</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.8.3\">512</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.8.4\">1,286</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.8.5\">4,829</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.8.6\">9.73%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.9.1\">Raspatory</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.9.2\">654</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.9.3\">76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.9.4\">84</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.9.5\">814</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.9.6\">1.64%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.10.1\">Retractor</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.10.2\">2,079</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.10.3\">0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.10.4\">325</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.10.5\">2,404</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.10.6\">4.84%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.11.1\">Scalpel</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.11.2\">739</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.11.3\">168</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.11.4\">159</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.11.5\">1,066</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.11.6\">2.15%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.12.1\">Scissors</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.12.2\">1,780</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.12.3\">391</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.12.4\">565</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.12.5\">2,736</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.12.6\">5.51%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.13\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.13.1\">Skewer</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.13.2\">212</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.13.3\">103</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.13.4\">29</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.13.5\">344</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.13.6\">0.69%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.14\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.14.1\">Suction Cannula</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.14.2\">3,134</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.14.3\">509</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.14.4\">768</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.14.5\">4,411</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.14.6\">8.88%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.15\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.15.1\">Syringe</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.15.2\">344</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.15.3\">96</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.15.4\">141</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.15.5\">581</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.15.6\">1.17%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.16.1\">Tweezers</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.16.2\">6,467</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.16.3\">950</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.16.4\">2,595</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.16.5\">10,012</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.2.1.1.1.16.6\">20.16%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.2.1.1.1.17\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T3.4.2.1.1.1.17.1\">Total</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T3.4.2.1.1.1.17.2\">32,272</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T3.4.2.1.1.1.17.3\">4,707</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T3.4.2.1.1.1.17.4\">12,673</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T3.4.2.1.1.1.17.5\">49,652</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T3.4.2.1.1.1.17.6\">100%</td>\n</tr>\n</table>\n</span></div>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.3.1\">(b) The number of instances per hand category.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S2.T3.4.4.1.1\" 
style=\"width:216.8pt;height:81.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-34.7pt,13.1pt) scale(0.75733987039548,0.75733987039548) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T3.4.4.1.1.1\">\n<tr class=\"ltx_tr\" id=\"S2.T3.4.4.1.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T3.4.4.1.1.1.1.1\">Class</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T3.4.4.1.1.1.1.2\">Train</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T3.4.4.1.1.1.1.3\">Val</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T3.4.4.1.1.1.1.4\">Test</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T3.4.4.1.1.1.1.5\">Total</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S2.T3.4.4.1.1.1.1.6\">Dist.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.4.1.1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T3.4.4.1.1.1.2.1\">Own hands left</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T3.4.4.1.1.1.2.2\">8,704</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T3.4.4.1.1.1.2.3\">1,505</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T3.4.4.1.1.1.2.4\">3,834</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T3.4.4.1.1.1.2.5\">14,043</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T3.4.4.1.1.1.2.6\">30.3%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.4.1.1.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.3.1\">Own hands right</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.3.2\">8,447</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.3.3\">1,467</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.3.4\">3,670</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.3.5\">13,584</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.3.6\">29.3%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.4.1.1.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.4.1\">Other hands left</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.4.2\">6,542</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.4.3\">1,079</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.4.4\">3,412</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.4.5\">11,033</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.4.6\">29.3%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.4.1.1.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.5.1\">Other hands right</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.5.2\">4,033</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.5.3\">867</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.5.4\">2,760</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.5.5\">7,660</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.4.4.1.1.1.5.6\">16.5%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.4.4.1.1.1.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T3.4.4.1.1.1.6.1\">Total</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T3.4.4.1.1.1.6.2\">27,726</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T3.4.4.1.1.1.6.3\">4,918</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T3.4.4.1.1.1.6.4\">13,676</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_bb ltx_border_t\" id=\"S2.T3.4.4.1.1.1.6.5\">4,6320</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T3.4.4.1.1.1.6.6\">100%</td>\n</tr>\n</table>\n</span></div>\n</td>\n</tr>\n</table>\n</figure>",
66
+ "capture": "Table 3: The number of instances per category in each set and the category distribution in the EgoSurgery-Tool dataset."
67
+ },
68
+ "4": {
69
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T4\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T4.8.1.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S3.T4.9.2\" style=\"font-size:90%;\">Performance of object detection methods on the EgoSurgery-Tool. The best performance is shown in bold.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T4.6.6\">\n<tr class=\"ltx_tr\" id=\"S3.T4.6.6.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.7.1\">(a) Surgical tool detection performance.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.3.3.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S3.T4.3.3.3.3.3\" style=\"width:216.8pt;height:138.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-32.9pt,21.0pt) scale(0.767141508309333,0.767141508309333) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.3.3.3.3.3.3\">\n<tr class=\"ltx_tr\" id=\"S3.T4.3.3.3.3.3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.3.3.3.3.3.3.3.4\">Methods</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.1.1.1.1.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.2.2.2.2.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.3.3.3.3.3.3.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.3.3.3.3.3.3.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.3.3.3.3.3.3.4.1\">Faster R-CNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib14\" title=\"\">14</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.3.3.3.3.3.3.4.2\">37.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.3.3.3.3.3.3.4.3\">55.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.3.3.3.3.3.3.4.4\">43.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.3.3.3.3.3.3.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.5.1\">RetinaNet\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib11\" title=\"\">11</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.5.2\">36.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.5.3\">53.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.5.4\">39.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.3.3.3.3.3.3.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.6.1\">Cascade R-CNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib4\" title=\"\">4</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.6.2\">38.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.6.3\">55.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.6.4\">44.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.3.3.3.3.3.3.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.7.1\">CenterNet\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib24\" title=\"\">24</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.7.2\">42.4</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S3.T4.3.3.3.3.3.3.7.3\">60.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.7.4\">46.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.3.3.3.3.3.3.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.8.1\">Sparse R-CNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib18\" title=\"\">18</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.8.2\">37.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.8.3\">55.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.8.4\">41.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.3.3.3.3.3.3.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.9.1\">VarifocalNet\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib23\" title=\"\">23</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.9.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.3.3.3.3.3.3.9.2.1\">45.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.3.3.3.3.3.3.9.3.1\">63.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.9.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.3.3.3.3.3.3.9.4.1\">51.1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.3.3.3.3.3.3.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.10.1\">Deformable-DETR\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib25\" title=\"\">25</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.10.2\">30.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.10.3\">46.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.10.4\">34.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.3.3.3.3.3.3.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.11.1\">DDQ\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib22\" title=\"\">22</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.11.2\">43.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.11.3\">59.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.3.3.3.3.3.11.4\">48.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.3.3.3.3.3.3.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.3.3.3.3.3.3.12.1\">DINO\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib20\" title=\"\">20</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.3.3.3.3.3.3.12.2\">39.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.3.3.3.3.3.3.12.3\">56.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.3.3.3.3.3.3.12.4\">43.5</td>\n</tr>\n</table>\n</span></div>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.6.6.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.8.1\">(b) Hand detection performance.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.6.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S3.T4.6.6.6.3.3\" style=\"width:216.8pt;height:139.7pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" 
style=\"transform:translate(-31.2pt,20.1pt) scale(0.776297437231644,0.776297437231644) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T4.6.6.6.3.3.3\">\n<tr class=\"ltx_tr\" id=\"S3.T4.6.6.6.3.3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.6.6.6.3.3.3.3.4\">Methods</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.4.4.4.1.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.5.5.5.2.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T4.6.6.6.3.3.3.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.6.6.6.3.3.3.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.6.6.6.3.3.3.4.1\">Faster R-CNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib14\" title=\"\">14</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.6.6.6.3.3.3.4.2\">55.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.6.6.6.3.3.3.4.3\">80.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.6.6.6.3.3.3.4.4\">62.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.6.6.6.3.3.3.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.5.1\">RetinaNet\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib11\" title=\"\">11</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.5.2\">57.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.5.3\">81.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.5.4\">62.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.6.6.6.3.3.3.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.6.1\">Cascade R-CNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib4\" title=\"\">4</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.6.2\">55.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.6.3\">80.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.6.4\">61.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.6.6.6.3.3.3.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.7.1\">CenterNet\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib24\" title=\"\">24</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.7.2\">56.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.7.3\">78.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.7.4\">63.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.6.6.6.3.3.3.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.8.1\">Sparse R-CNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib18\" title=\"\">18</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.8.2\">55.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.8.3\">78.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.8.4\">60.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.6.6.6.3.3.3.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.9.1\">VarifocalNet\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib23\" 
title=\"\">23</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.9.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.6.6.6.3.3.3.9.2.1\">59.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.6.6.6.3.3.3.9.3.1\">82.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.9.4\">65.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.6.6.6.3.3.3.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.10.1\">Deformable-DETR\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib25\" title=\"\">25</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.10.2\">54.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.10.3\">78.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.10.4\">59.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.6.6.6.3.3.3.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.11.1\">DDQ\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib22\" title=\"\">22</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.11.2\">58.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.11.3\">73.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.6.6.6.3.3.3.11.4\">60.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.6.6.6.3.3.3.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.6.6.6.3.3.3.12.1\">DINO\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.03095v4#bib.bib20\" title=\"\">20</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.6.6.6.3.3.3.12.2\">58.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.6.6.6.3.3.3.12.3\">80.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.6.6.6.3.3.3.12.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.6.6.6.3.3.3.12.4.1\">65.6</span></td>\n</tr>\n</table>\n</span></div>\n</td>\n</tr>\n</table>\n</figure>",
70
+ "capture": "Table 4: Performance of object detection methods on the EgoSurgery-Tool. The best performance is shown in bold."
71
+ },
72
+ "5": {
73
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T5\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T5.4.1.1\" style=\"font-size:90%;\">Table 5</span>: </span><span class=\"ltx_text\" id=\"S3.T5.5.2\" style=\"font-size:90%;\">Left: Faster-RCNN hand detection performance comparison between the existing hand detection dataset, EgoHands, and our dataset. Right: Pretrained Faster-RCNN hand detection performance with fine-tuning on our dataset, separated by training order.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T5.2.2\" style=\"width:216.8pt;height:64.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-20.9pt,6.3pt) scale(0.83818162877007,0.83818162877007) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T5.2.2.2\">\n<tr class=\"ltx_tr\" id=\"S3.T5.2.2.2.2\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T5.1.1.1.1.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T5.1.1.1.1.1.1\">\n<tr class=\"ltx_tr\" id=\"S3.T5.1.1.1.1.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T5.1.1.1.1.1.1.1.2\">Training data</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T5.1.1.1.1.1.1.1.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.1.1.1.1.1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T5.1.1.1.1.1.1.2.1\">EgoHands</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T5.1.1.1.1.1.1.2.2\">8.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.1.1.1.1.1.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T5.1.1.1.1.1.1.3.1\"><span class=\"ltx_text\" id=\"S3.T5.1.1.1.1.1.1.3.1.1\" style=\"background-color:#E6E6E6;\">Ours</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T5.1.1.1.1.1.1.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.1.1.1.1.1.1.3.2.1\" style=\"background-color:#E6E6E6;\">55.3</span></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T5.2.2.2.2.2\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T5.2.2.2.2.2.1\">\n<tr class=\"ltx_tr\" id=\"S3.T5.2.2.2.2.2.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T5.2.2.2.2.2.1.1.2\">Pre-training dataset</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T5.2.2.2.2.2.1.1.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.2.2.2.2.2.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T5.2.2.2.2.2.1.2.1\">ImageNet</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T5.2.2.2.2.2.1.2.2\">50.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.2.2.2.2.2.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T5.2.2.2.2.2.1.3.1\">COCO</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T5.2.2.2.2.2.1.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.2.2.2.2.2.1.3.2.1\">55.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.2.2.2.2.2.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T5.2.2.2.2.2.1.4.1\">COCO, EgoHands</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T5.2.2.2.2.2.1.4.2\">52.1</td>\n</tr>\n</table>\n</td>\n</tr>\n</table>\n</span></div>\n</figure>",
74
+ "capture": "Table 5: Left: Faster-RCNN hand detection performance comparison between the existing hand detection dataset, EgoHands, and our dataset. Right: Pretrained Faster-RCNN hand detection performance with fine-tuning on our dataset, separated by training order."
75
+ }
76
+ },
77
+ "image_paths": {
78
+ "2": {
79
+ "figure_path": "2406.03095v4_figure_2.png",
80
+ "caption": "Figure 2: The distribution of surgical tool categories.",
81
+ "url": "http://arxiv.org/html/2406.03095v4/x2.png"
82
+ },
83
+ "3": {
84
+ "figure_path": "2406.03095v4_figure_3.png",
85
+ "caption": "Figure 3: The distribution of hand categories.",
86
+ "url": "http://arxiv.org/html/2406.03095v4/x3.png"
87
+ },
88
+ "4": {
89
+ "figure_path": "2406.03095v4_figure_4.png",
90
+ "caption": "Figure 4: Co-occurrence matrix between surgical tools and surgical phases.",
91
+ "url": "http://arxiv.org/html/2406.03095v4/extracted/6028009/figs/co_occurence_phase.png"
92
+ },
93
+ "5": {
94
+ "figure_path": "2406.03095v4_figure_5.png",
95
+ "caption": "Figure 5: Qualitative results for the object detection challenge. The first column shows correct detections, while the second column shows incorrect cases.",
96
+ "url": "http://arxiv.org/html/2406.03095v4/x4.png"
97
+ },
98
+ "6": {
99
+ "figure_path": "2406.03095v4_figure_6.png",
100
+ "caption": "Figure 6: Confusion matrix of surgical tool detection model.",
101
+ "url": "http://arxiv.org/html/2406.03095v4/extracted/6028009/figs/confusion_matrix_tools.png"
102
+ }
103
+ },
104
+ "validation": true,
105
+ "references": [
106
+ {
107
+ "1": {
108
+ "title": "A semi-supervised Teacher-Student framework for surgical tool detection and localization.",
109
+ "author": "Mansoor Ali, Gilberto Ochoa-Ruiz, and Sharib Ali.",
110
+ "venue": "CMBBE, 2022.",
111
+ "url": null
112
+ }
113
+ },
114
+ {
115
+ "2": {
116
+ "title": "Hand detection using multiple proposals.",
117
+ "author": "Andrew Zisserman Arpit Mittal and Philip Torr.",
118
+ "venue": "In BMVC, 2011.",
119
+ "url": null
120
+ }
121
+ },
122
+ {
123
+ "3": {
124
+ "title": "Lending A Hand: Detecting Hands and Recognizing Activities in Complex Egocentric Interactions.",
125
+ "author": "Sven Bambach, Stefan Lee, David J. Crandall, and Chen Yu.",
126
+ "venue": "In ICCV, 2015.",
127
+ "url": null
128
+ }
129
+ },
130
+ {
131
+ "4": {
132
+ "title": "Cascade R-CNN: Delving Into High Quality Object Detection.",
133
+ "author": "Zhaowei Cai and Nuno Vasconcelos.",
134
+ "venue": "In CVPR, June 2018.",
135
+ "url": null
136
+ }
137
+ },
138
+ {
139
+ "5": {
140
+ "title": "MMDetection: Open mmlab detection toolbox and benchmark.",
141
+ "author": "Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jiarui Xu, Zheng Zhang, Dazhi Cheng, Chenchen Zhu, Tianheng Cheng, Qijie Zhao, Buyu Li, Xin Lu, Rui Zhu, Yue Wu, Jifeng Dai, Jingdong Wang, Jianping Shi, Wanli Ouyang, Chen Change Loy, and Dahua Lin.",
142
+ "venue": "arXiv preprint arXiv:1906.07155, 2019.",
143
+ "url": null
144
+ }
145
+ },
146
+ {
147
+ "6": {
148
+ "title": "Analyzing Surgical Technique in Diverse Open Surgical Videos With Multitask Machine Learning.",
149
+ "author": "Goodman et al.",
150
+ "venue": "JAMA Surgery, 2024.",
151
+ "url": null
152
+ }
153
+ },
154
+ {
155
+ "7": {
156
+ "title": "Surgical Tool Detection in Open Surgery Videos.",
157
+ "author": "Ryo Fujii, Ryo Hachiuma, Hiroki Kajita, and Hideo Saito.",
158
+ "venue": "Applied Sciences, 2022.",
159
+ "url": null
160
+ }
161
+ },
162
+ {
163
+ "8": {
164
+ "title": "Weakly Semi-Supervised Tool Detection in Minimally Invasive Surgery Videos.",
165
+ "author": "Ryo Fujii, Ryo Hachiuma, and Hideo Saito.",
166
+ "venue": "In ICASSP, 2024.",
167
+ "url": null
168
+ }
169
+ },
170
+ {
171
+ "9": {
172
+ "title": "EgoSurgery-Phase: A Dataset of Surgical Phase Recognition from Egocentric Open Surgery Videos.",
173
+ "author": "Ryo Fujii, Masashi Hatano, Hideo Saito, and Hiroki Kajita.",
174
+ "venue": "In MICCAI, 2024.",
175
+ "url": null
176
+ }
177
+ },
178
+ {
179
+ "10": {
180
+ "title": "Tool detection and operative skill assessment in surgical videos using region-based convolutional neural networks.",
181
+ "author": "Amy Jin, Serena Yeung, Jeffrey Jopling, Jonathan Krause, Dan Azagury, Arnold Milstein, and Li Fei-Fei.",
182
+ "venue": "In WACV, 2018.",
183
+ "url": null
184
+ }
185
+ },
186
+ {
187
+ "11": {
188
+ "title": "Focal loss for dense object detection.",
189
+ "author": "Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Doll\u00e1r.",
190
+ "venue": "In ICCV, 2017.",
191
+ "url": null
192
+ }
193
+ },
194
+ {
195
+ "12": {
196
+ "title": "Microsoft coco: Common objects in context.",
197
+ "author": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick.",
198
+ "venue": "In ECCV, 2014.",
199
+ "url": null
200
+ }
201
+ },
202
+ {
203
+ "13": {
204
+ "title": "M2cai surgical tool detection challenge report.",
205
+ "author": "Ashwin Raju, Heng Wang, and Junzhou Huang.",
206
+ "venue": "University of Texas at Arlington, Tech. Rep., 2016.",
207
+ "url": null
208
+ }
209
+ },
210
+ {
211
+ "14": {
212
+ "title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.",
213
+ "author": "Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun.",
214
+ "venue": "In NeurIPS, 2015.",
215
+ "url": null
216
+ }
217
+ },
218
+ {
219
+ "15": {
220
+ "title": "Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection.",
221
+ "author": "Duygu Sarikaya, Jason J. Corso, and Khurshid A. Guru.",
222
+ "venue": "T-MI, 2017.",
223
+ "url": null
224
+ }
225
+ },
226
+ {
227
+ "16": {
228
+ "title": "Understanding Human Hands in Contact at Internet Scale.",
229
+ "author": "Dandan Shan, Jiaqi Geng, Michelle Shu, and David F. Fouhey.",
230
+ "venue": "In CVPR, 2020.",
231
+ "url": null
232
+ }
233
+ },
234
+ {
235
+ "17": {
236
+ "title": "Real-time surgical tool detection in minimally invasive surgery based on attention-guided convolutional neural network.",
237
+ "author": "Pan Shi, Zijian Zhao, Sanyuan Hu, and Faliang Chang.",
238
+ "venue": "IEEE Access, 2020.",
239
+ "url": null
240
+ }
241
+ },
242
+ {
243
+ "18": {
244
+ "title": "Sparse R-CNN: End-to-End Object Detection With Learnable Proposals.",
245
+ "author": "Peize Sun, Rufeng Zhang, Yi Jiang, Tao Kong, Chenfeng Xu, Wei Zhan, Masayoshi Tomizuka, Lei Li, Zehuan Yuan, Changhu Wang, and Ping Luo.",
246
+ "venue": "In CVPR, 2021.",
247
+ "url": null
248
+ }
249
+ },
250
+ {
251
+ "19": {
252
+ "title": "Weakly-supervised learning for tool localization in laparoscopic videos.",
253
+ "author": "Armine Vardazaryan, Didier Mutter, Jacques Marescaux, and Nicolas Padoy.",
254
+ "venue": "In MICCAI, 2018.",
255
+ "url": null
256
+ }
257
+ },
258
+ {
259
+ "20": {
260
+ "title": "DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection.",
261
+ "author": "Hao Zhang, Feng Li, Shilong Liu, Lei Zhang, Hang Su, Jun Zhu, Lionel Ni, and Heung-Yeung Shum.",
262
+ "venue": "In ICLR, 2023.",
263
+ "url": null
264
+ }
265
+ },
266
+ {
267
+ "21": {
268
+ "title": "Using Computer Vision to Automate Hand Detection and Tracking of Surgeon Movements in Videos of Open Surgery.",
269
+ "author": "Michael Zhang, Xiaotian Cheng, Daniel Copeland, Arjun Desai, Melody Guan, Gabriel Brat, and Serena Yeung.",
270
+ "venue": "AMIA, 2021.",
271
+ "url": null
272
+ }
273
+ },
274
+ {
275
+ "22": {
276
+ "title": "Dense Distinct Query for End-to-End Object Detection.",
277
+ "author": "Shilong Zhang, Xinjiang Wang, Jiaqi Wang, Jiangmiao Pang, Chengqi Lyu, Wenwei Zhang, Ping Luo, and Kai Chen.",
278
+ "venue": "In CVPR, 2023.",
279
+ "url": null
280
+ }
281
+ },
282
+ {
283
+ "23": {
284
+ "title": "Varifocalnet: An iou-aware dense object detector.",
285
+ "author": "Zhang, Haoyang and Wang, Ying and Dayoub, Feras and S\u00fcnderhauf, Niko.",
286
+ "venue": "In CVPR, 2021.",
287
+ "url": null
288
+ }
289
+ },
290
+ {
291
+ "24": {
292
+ "title": "Objects as Points.",
293
+ "author": "Xingyi Zhou, Dequan Wang, and Philipp Kr\u00e4henb\u00fchl.",
294
+ "venue": "In arXiv preprint arXiv:1904.07850, 2019.",
295
+ "url": null
296
+ }
297
+ },
298
+ {
299
+ "25": {
300
+ "title": "Deformable DETR: Deformable Transformers for End-to-End Object Detection.",
301
+ "author": "Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai.",
302
+ "venue": "In ICLR, 2021.",
303
+ "url": null
304
+ }
305
+ },
306
+ {
307
+ "26": {
308
+ "title": "Surgical tool classification and localization: results and methods from the MICCAI 2022 SurgToolLoc challenge.",
309
+ "author": "Aneeq Zia, Kiran Bhattacharyya, Xi Liu, Max Berniker, Ziheng Wang, Rogerio Nespolo, Satoshi Kondo, Satoshi Kasai, Kousuke Hirasawa, Bo Liu, David Austin, Yiheng Wang, Michal Futrega, Jean-Francois Puget, Zhenqiang Li, Yoichi Sato, Ryo Fujii, Ryo Hachiuma, Mana Masuda, Hideo Saito, An Wang, Mengya Xu, Mobarakol Islam, Long Bai, Winnie Pang, Hongliang Ren, Chinedu Nwoye, Luca Sestini, Nicolas Padoy, Maximilian Nielsen, Samuel Sch\u00fcttler, Thilo Sentker, H\u00fcmeyra Husseini, Ivo Baltruschat, R\u00fcdiger Schmitz, Ren\u00e9 Werner, Aleksandr Matsun, Mugariya Farooq, Numan Saaed, Jose Renato Restom Viera, Mohammad Yaqub, Neil Getty, Fangfang Xia, Zixuan Zhao, Xiaotian Duan, Xing Yao, Ange Lou, Hao Yang, Jintong Han, Jack Noble, Jie Ying Wu, Tamer Abdulbaki Alshirbaji, Nour Aldeen Jalal, Herag Arabian, Ning Ding, Knut Moeller, Weiliang Chen, Quan He, Muhammad Bilal, Taofeek Akinosho, Adnan Qayyum, Massimo Caputo, Hunaid Vohra, Michael Loizou, Anuoluwapo Ajayi, Ilhem Berrou, Faatihah Niyi-Odumosu, Lena Maier-Hein, Danail\nStoyanov, Stefanie Speidel, and Anthony Jarc.",
310
+ "venue": "arXiv preprint arXiv:2305.07152, 2023.",
311
+ "url": null
312
+ }
313
+ }
314
+ ],
315
+ "url": "http://arxiv.org/html/2406.03095v4"
316
+ }
20241127/2406.14753v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2406.17995v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2406.19226v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2406.19540v2.json ADDED
@@ -0,0 +1,115 @@
1
+ {
2
+ "title": "Weighted Circle Fusion: Ensembling Circle Representation from Different Object Detection Results",
3
+ "abstract": "Recently, the use of circle representation has emerged as a method to improve the identification of spherical objects (such as glomeruli, cells, and nuclei) in medical imaging studies. In traditional bounding box-based object detection, combining results from multiple models improves accuracy, especially when real-time processing isn\u2019t crucial. Unfortunately, this widely adopted strategy is not readily available for combining circle representations. In this paper, we propose Weighted Circle Fusion (WCF), a simple approach for merging predictions from various circle detection models. Our method leverages confidence scores associated with each proposed bounding circle to generate averaged circles. We evaluate our method on a proprietary dataset for glomerular detection in whole slide imaging (WSI) and find a performance gain of 5% compared to existing ensemble methods. Additionally, we assess the efficiency of two annotation methods\u2014fully manual annotation and a human-in-the-loop (HITL) approach\u2014in labeling 200,000 glomeruli. The HITL approach, which integrates machine learning detection with human verification, demonstrated remarkable improvements in annotation efficiency. The Weighted Circle Fusion technique not only enhances object detection precision but also notably reduces false detections, presenting a promising direction for future research and application in pathological image analysis. The source code has been made publicly available at https://github.com/hrlblab/WeightedCircleFusion",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "INTRODUCTION",
9
+ "text": "###figure_1### Object detection plays an essential role in medical imaging [1 ###reference_b1###], offering a wide range of applications that are enhanced by machine learning technologies. Traditional object detection models, such as Faster R-CNN [2 ###reference_b2###], YOLO [3 ###reference_b3###], and SSD [4 ###reference_b4###], have been widely adopted across various domains for their efficiency and accuracy [5 ###reference_b5###]. In medical object detection tasks, detecting glomeruli is essential for effective diagnosis and quantitative assessments in renal pathology. For these tasks, CircleNet [6 ###reference_b6###] stands out in the medical field for its unique approach to detection tasks. Unlike conventional detection networks that rely on bounding boxes, CircleNet offers a rotation-consistent circle representation with fewer parameters for ball-shaped objects [7 ###reference_b7###], such as glomeruli in kidney pathology (Fig. 1 ###reference_###). Despite CircleNet\u2019s advantages, relying on a single CircleNet-trained model for detection tasks presents considerable challenges, including missed and false detections [8 ###reference_b8###].\nTo enhance the robustness of object detection, ensemble learning algorithms, such as Non-Maximum Suppression (NMS) [9 ###reference_b9###], Soft-NMS [10 ###reference_b10###], and Weighted Box Fusion (WBF) [11 ###reference_b11###], have been proposed to fuse the detection results from multiple models (Fig. 1 ###reference_###). NMS and Soft-NMS work by eliminating lower confidence detections based on an Intersection Over Union (IOU) threshold [12 ###reference_b12###], with Soft-NMS adjusting detection scores rather than removing detections outright. WBF further refines this approach by merging overlapping detections, allowing those with higher confidence scores to improve the merged result. Unfortunately, such methods were optimized for traditional bounding box based representation for natural images.\nIn this paper, we propose a simple ensemble method, called Weighted Circle Fusion (WCF), designed specifically for circle representation in medical imaging detections. This method merges overlapping detections, with the fusion result\u2019s position decided by the confidence of the contributing detections. Importantly, it calculates the number of overlapped circles merged for each object, while computing the average score for false positive elimination. In experiments, we assessed the detection results of glomeruli on whole slide images (WSIs) using five-fold cross-validation. Additionally, to validate the method\u2019s consistency across rotations, we tested it on images rotated by 90 degrees. The results demonstrate the method\u2019s decent rotation consistency. To summarize, the contribution of this paper is threefold:\nThe WCF method, combined with a dual thresholds strategy, enhances precision and reliability by fusing detection results from circle representation and eliminating false positives based on confidence scores and overlap across hard decisions.\nOur method achieved a substantial performance gain ( 5% ) compared to the average results of individual models.\nUtilizing a human-in-the-loop (HITL) approach to test the time required to annotate 10 WSIs, showed that it saves 68.59% of total annotation time compared to complete manual annotation.\n###figure_2###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Methods",
15
+ "text": "In this section, we introduce an innovative method for fusing predictions: Weighted Circle Fusion (Fig. 2 ###reference_###). This technique is designed to enhance the accuracy of object detection, particularly focusing on circular objects commonly encountered in medical imaging, such as cells, glomeruli, or other spherically shaped features. Our approach involves pairwise fusion of the detection results from five models, where the results from the first model are fused with the second, then the combined results are fused with the third model, and so on until the fifth model is included.\nThe WCF process begins with aggregating predictions from multiple models, resulting in several sets of detection outcomes. Initially, the detection results from the first model are stored in a list, referred to as . Subsequent detections from other models are compared against the entries in list based on their cIOU [6 ###reference_b6###].The definition of cIOU can be found in the corresponding reference. If the cIOU between any two detections exceeds a predetermined threshold, indicating an enhanced agreement between models on the presence and location of an object, these detections are considered for fusion.\nUpon fusion of the two results, it is necessary to recalculate the coordinates and confidence score of the new, combined result. Given that our detection results are represented as circles, we utilize the circles\u2019 center coordinates and radii for computation. Suppose the center coordinates and radius of a result from the first set are\n(,) and with a confidence score ; and similarly, (,) and with score for a result from the second set. The formulas for calculating the weighted average coordinates and radius are as follows:\nFor center coordinates:\nFor radius:\nAfter calculating the fused coordinates, we compute the average of the scores of the merged results and keep track of how many detections have been merged to form this new result.\nIf a result from the second set cannot fuse with any result in list , it is directly added to . This process is repeated for each set of predictions until all m sets have been processed.\nUpon completing the fusion of all model predictions, the confidence score for the fused result is calculated as follows:\nwhere is the confidence score of each individual model\u2019s prediction.\nAdditionally, we apply a \u201ccount score\u201d to quantify how many model predictions have been fused into a single detection. The max value of depends on how many models we use in our ensemble method.\nTo further refine the detection outcomes, we introduced two thresholds: \u201cT count\u201d for the count value and \u201cT score\u201d for the average score of each result. Specifically, if both the count value and average score are below their respective thresholds, the detection result will be discarded. For the experiments in this paper, \u201dT count\u201d is set to 2 and \u201dT score\u201d is set to 0.9. This strategic approach enhances the precision of detection, making WCF particularly effective for instances where erroneous detections are common."
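The inline symbols and the weighted-average equations in the passage above were dropped during extraction. Read plainly, the fused center and radius are confidence-weighted averages, e.g. x = (s1*x1 + s2*x2)/(s1 + s2) for two circles with scores s1 and s2, and the fused confidence is the mean of the merged scores. The Python sketch below is an illustrative reconstruction of that procedure, not the authors' implementation; the function names, the closed-form circle-overlap approximation used in place of the referenced cIoU, and the default thresholds (taken from the values quoted later in the experiments) are assumptions.

```python
import math

def circle_iou(c1, c2):
    """Overlap of two circles (x, y, r): intersection area divided by union area."""
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    d = math.hypot(x1 - x2, y1 - y2)
    if d >= r1 + r2:                      # disjoint circles
        return 0.0
    if d <= abs(r1 - r2):                 # one circle fully inside the other
        inter = math.pi * min(r1, r2) ** 2
    else:                                 # standard two-circle intersection area
        a1 = r1 ** 2 * math.acos((d ** 2 + r1 ** 2 - r2 ** 2) / (2 * d * r1))
        a2 = r2 ** 2 * math.acos((d ** 2 + r2 ** 2 - r1 ** 2) / (2 * d * r2))
        a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2))
        inter = a1 + a2 - a3
    union = math.pi * (r1 ** 2 + r2 ** 2) - inter
    return inter / union

def weighted_circle_fusion(model_outputs, ciou_thr=0.5, t_count=2, t_score=0.9):
    """model_outputs: one detection list per model; each detection is ((x, y, r), score)."""
    fused = []                            # each entry: [circle, summed score, merge count]
    for detections in model_outputs:
        for circle, score in detections:
            for entry in fused:
                if circle_iou(entry[0], circle) > ciou_thr:
                    (fx, fy, fr), s_sum, n = entry
                    x, y, r = circle
                    w = s_sum + score     # confidence-weighted average of centers and radii
                    entry[0] = ((fx * s_sum + x * score) / w,
                                (fy * s_sum + y * score) / w,
                                (fr * s_sum + r * score) / w)
                    entry[1] = w
                    entry[2] = n + 1
                    break
            else:                         # no sufficiently overlapping candidate: keep as new
                fused.append([circle, score, 1])
    results = []
    for circle, s_sum, n in fused:
        avg_score = s_sum / n             # fused confidence = mean of the merged scores
        if n >= t_count or avg_score >= t_score:  # discard only if BOTH fall below the thresholds
            results.append((circle, avg_score, n))
    return results
```

With the settings quoted later in the experiments, a call would look like `weighted_circle_fusion(all_outputs, ciou_thr=0.5, t_count=2, t_score=0.9)`.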
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Experiments",
21
+ "text": ""
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Data",
27
+ "text": "For our training dataset, we utilized an in-house dataset. This included 15,190 patches from whole slide images derived from renal biopsies. Additionally, we incorporated 9,260 patches from PAS-stained WSIs of murine kidneys. This dataset was divided into training, validation, and testing sets with a ratio of 7:1:2 for each of the five models.\nFor the training dataset for the plus version models, an additional 100,000 glomeruli were added to the basic training dataset used to train the base version of the model. These additional glomeruli were sourced from 170 WSI from our in-house dataset. The 100,000 glomeruli were divided into five groups of 40,000 glomeruli, with each group added to a different model. Each group of 40,000 glomeruli had a 20,000 overlap with the others. All patches in our training dataset were either cropped or resized to dimensions of 512 \u00d7 512 pixels. Each patch contained at least one glomerulus.\nTo evaluate the efficiency of different annotation methods for 200,000 glomeruli, we compared fully manual annotation with a human-in-the-loop (HITL) approach. The manual method involved human experts marking each glomerulus, whereas the HITL method integrated machine learning detection with human verification and correction. This comparison was conducted to assess the time efficiency and effectiveness of incorporating machine learning into the annotation process.\nFor the testing dataset, we included 15 PAS-stained WSIs, encompassing 2051 mouse glomeruli."
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Experiment Setting",
33
+ "text": "The models were trained on the CircleNet architecture with a dla-34 backbone, using slightly varied datasets to enhance learning diversity and robustness. Training spanned 30 epochs for each model, and outputs were refined using the Non-Maximum Suppression algorithm.\nWe evaluated the efficiency of two annotation methods for 200,000 glomeruli in our KidneyPath dataset: fully manual annotation and a human-in-the-loop (HITL) approach. The manual method involved human experts marking each glomerulus, while the HITL method combined machine learning detection with human verification and correction. This comparison aimed to assess the time efficiency of integrating machine learning into the annotation process."
34
+ },
35
+ {
36
+ "section_id": "3.2.1",
37
+ "parent_section_id": "3.2",
38
+ "section_name": "3.2.1 Fusion Method Comparison Experiments",
39
+ "text": "In this part of the experiment, we compared three ensemble methods: NMS, Soft-NMS, and WCF, as well as the results from five models and their plus version.\nEach model was enhanced by the addition of 40,000 glomeruli training data, leading to improved performance. These 40,000 glomeruli were derived from an additional collection of 100,000 glomeruli, with a 20,000 overlap between each model.\nOur WCF method was configured with specific parameters: a circle Intersection Over Union (cIOU) threshold of 0.5. For the experiments in this paper, \u201dT count\u201d is set to 2 and \u201dT score\u201d is set to 0.9. Initially, the WCF algorithm was applied to the outputs refined by the NMS algorithm to combine the strengths of individual detections into a single, more accurate result. The effectiveness of the WCF-fused results was meticulously evaluated and compared against the performance of individual models, traditional NMS, and Soft-NMS, with cIOU thresholds set at 0.5 and 0.3, respectively."
40
+ },
41
+ {
42
+ "section_id": "3.2.2",
43
+ "parent_section_id": "3.2",
44
+ "section_name": "3.2.2 Rotational Consistency Experiments",
45
+ "text": "In this part, we assessed the rotational consistency of our fusion method. This was achieved by extracting patches from whole slide images and rotating them by 90 degrees prior to the detection process. The results from these rotated patches were then subjected to the same fusion process."
46
+ },
47
+ {
48
+ "section_id": "3.2.3",
49
+ "parent_section_id": "3.2",
50
+ "section_name": "3.2.3 Evaluation",
51
+ "text": "The models were evaluated based on the mean average precision (mAP) at IoU values of 0.5 and 0.75. Additionally, mAP was computed across a spectrum of IoU thresholds, thereby conducting a comprehensive assessment. This metric was calculated over a range of IoU thresholds, from 0.5 to 0.95 in steps of 0.05, at each step averaging the precision. Alongside precision, the average recall across these IoU thresholds was also measured, providing a rounded evaluation of model performance.\nThe IoU metric, a ratio reflecting the overlap between two objects versus their combined area, is traditionally calculated for bounding box representations. However, given that this study\u2019s predictions utilize circle representations, we adopted the circle IoU (cIoU) [13 ###reference_b13###] metric as our evaluation standard. The cIoU offers a more fitting measure for our circular detection outputs, aligning with the unique geometry of the objects being detected.\n###figure_3###"
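As a small illustration of the averaged metric described above (not the authors' evaluation code; `ap_at` is a placeholder for an evaluator that returns average precision at a single cIoU threshold):

```python
import numpy as np

def map_over_thresholds(ap_at, lo=0.5, hi=0.95, step=0.05):
    """COCO-style mAP: mean AP over cIoU thresholds 0.50, 0.55, ..., 0.95."""
    thresholds = np.arange(lo, hi + 1e-9, step)
    return float(np.mean([ap_at(t) for t in thresholds]))
```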
52
+ },
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Results",
57
+ "text": ""
58
+ },
59
+ {
60
+ "section_id": "4.1",
61
+ "parent_section_id": "4",
62
+ "section_name": "Performance on glomerular detection",
63
+ "text": "Fig. 3 ###reference_### and Table 1 ###reference_### showcase the performance of our fusion method, which integrates the outputs from five models and their enhanced versions on murine glomerular WSIs. Averaged results are calculated from five original models and five enhanced models with 40,000 additional global features, providing a comprehensive comparison across different fusion methods. The results demonstrate that our approach achieves remarkably higher mAP values and average recall rates. The enhanced models exhibit better average recall and average precision compared to the original models. Notably, the mAP obtained through our method surpasses that of any individual model included in the study. Although the average recall of our method is slightly lower compared to other fusion methods, it remains competitively high and exceeds the average recall of the five original models."
64
+ },
65
+ {
66
+ "section_id": "4.2",
67
+ "parent_section_id": "4",
68
+ "section_name": "Rotation consistency",
69
+ "text": "The study explores the rotation consistency of our object detection method, offering detailed insights in Table 2 ###reference_###. The results underscore the WCF method\u2019s notable consistency under rotation, highlighting its robustness against orientation changes. Our enhanced versions of the models also show better rotation consistency compared to the original models."
70
+ },
71
+ {
72
+ "section_id": "4.3",
73
+ "parent_section_id": "4",
74
+ "section_name": "Manual Annotation vs. Human-in-the-loop Annotation",
75
+ "text": "To evaluate the efficiency of manual annotation compared to a human-in-the-loop approach, we conducted a time analysis for annotating 10 WSIs. The results demonstrate that the HITL method considerably improves annotation efficiency, requiring an average of 2.9 minutes per image compared to 9.23 minutes per image for manual annotation."
76
+ },
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Conclusion",
81
+ "text": "This work is the first to ensemble detection results for circle representation. We introduced a novel ensemble method, Weighted Circle Fusion (WCF), to refine predictions from multiple deep learning models. WCF demonstrated superior precision metrics, outperforming conventional benchmarks, especially in high-error contexts. Our findings highlight WCF\u2019s potential in reducing errors in circle representation, making it a valuable strategy for medical image analysis using optimized deep learning approaches."
82
+ }
83
+ ],
84
+ "appendix": [],
85
+ "tables": {
86
+ "1": {
87
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.1\" style=\"width:390.3pt;height:121.9pt;vertical-align:-0.7pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-65.6pt,20.4pt) scale(0.748375656999809,0.748375656999809) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T1.1.1.1.1.1\">Model</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.2\">mAP(0.5:0.95)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.3\">mAP(@0.5IOU)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.4\">mAP(@0.75IOU)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.5\">Average Recall(0.5:0.95)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T1.1.1.2.1.1\">CircleNet\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.19540v2#bib.bib6\" title=\"\">6</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.1.1.2.1.2\">0.594</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.1.1.2.1.3\">0.784</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.1.1.2.1.4\">0.676</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.1.1.2.1.5\">0.605</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.3.2.1\">CircleNet+</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.3.2.2.1\">0.764</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.3.2.3.1\">0.899</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.3.2.4.1\">0.825</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.3.2.5.1\">0.738</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.1.1.4.3.1\">NMS\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.19540v2#bib.bib9\" title=\"\">9</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.4.3.2\">0.463</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.4.3.3\">0.566</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.4.3.4\">0.516</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.4.3.5\">0.745</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.5.4.1\">NMS+</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.2\">0.644</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.3\">0.749</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.4\">0.696</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T1.1.1.5.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.5.4.5.1\" style=\"color:#D26446;\">0.834</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.6.5.1\">Soft-NMS\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.19540v2#bib.bib10\" title=\"\">10</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.2\">0.319</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.3\">0.402</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.4\">0.357</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.5\">0.722</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.7.6.1\">Soft-NMS+</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.2\">0.419</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.3\">0.513</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.4\">0.452</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.7.6.5.1\" style=\"color:#6464E6;\">0.793</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.8.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.8.7.1\">WCF(Ours)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.8.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.8.7.2.1\" style=\"color:#6464E6;\">0.707</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.8.7.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.8.7.3.1\" style=\"color:#6464E6;\">0.907</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.8.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.8.7.4.1\" style=\"color:#6464E6;\">0.810</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.8.7.5\">0.629</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.9.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T1.1.1.9.8.1\">WCF+(Ours)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.1.1.9.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.9.8.2.1\" style=\"color:#D26446;\">0.829</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.1.1.9.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.9.8.3.1\" style=\"color:#D26446;\">0.955</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.1.1.9.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.9.8.4.1\" style=\"color:#D26446;\">0.905</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.1.1.9.8.5\">0.782</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>The table shows the averaged performance metrics of five original models (\u201dModels in fold\u201d) and their enhanced versions with 40,000 additional global features (\u201dModels+ in fold\u201d). Metrics include mean average precision (mAP) at various IoU thresholds and average recall, evaluated using NMS, soft-NMS, and WCF fusion methods. Results highlight the superior performance of the WCF method across models.</figcaption>\n</figure>",
88
+ "capture": "Table 1: The table shows the averaged performance metrics of five original models (\u201dModels in fold\u201d) and their enhanced versions with 40,000 additional global features (\u201dModels+ in fold\u201d). Metrics include mean average precision (mAP) at various IoU thresholds and average recall, evaluated using NMS, soft-NMS, and WCF fusion methods. Results highlight the superior performance of the WCF method across models."
89
+ },
90
+ "2": {
91
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T2.1.1.1.1\">Model</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.2\">mAP(0.5:0.95)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.3\">mAP(@0.5IOU)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.4\">mAP(@0.75IOU)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.5\">Average Recall(0.5:0.95)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T2.1.2.1.1\">CircleNet\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.19540v2#bib.bib6\" title=\"\">6</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.1.2.1.2\">0.728</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.1.2.1.3\">0.852</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.1.2.1.4\">0.826</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.1.2.1.5\">0.727</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.3.2.1\">CircleNet+</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.3.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.3.2.2.1\">0.775</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.3.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.3.2.3.1\">0.895</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.3.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.3.2.4.1\">0.876</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.3.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.3.2.5.1\">0.776</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.1.4.3.1\">NMS\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.19540v2#bib.bib9\" title=\"\">9</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.4.3.2\">0.641</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.4.3.3\">0.776</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.4.3.4\">0.730</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.4.3.5\">0.636</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.5.4.1\">NMS+</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.4.2\">0.719</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.4.3\">0.828</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.4.4\">0.803</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.4.5\">0.717</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.6.5.1\">Soft-NMS\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.19540v2#bib.bib10\" title=\"\">10</a>]</cite>\n</th>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.5.2\">0.570</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.5.3\">0.661</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.5.4\">0.635</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.5.5\">0.565</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.7.6.1\">Soft-NMS+</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.6.2\">0.616</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.6.3\">0.699</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.6.4\">0.686</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.6.5\">0.613</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.8.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.8.7.1\">WCF(Ours)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.8.7.2.1\" style=\"color:#6464E6;\">0.823</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.7.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.8.7.3.1\" style=\"color:#6464E6;\">0.924</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.8.7.4.1\" style=\"color:#6464E6;\">0.913</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.7.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.8.7.5.1\" style=\"color:#6464E6;\">0.817</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.9.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T2.1.9.8.1\">WCF+ (Ours)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.9.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.9.8.2.1\" style=\"color:#D26446;\">0.873</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.9.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.9.8.3.1\" style=\"color:#D26446;\">0.951</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.9.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.9.8.4.1\" style=\"color:#D26446;\">0.944</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.1.9.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.9.8.5.1\" style=\"color:#D26446;\">0.873</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Performance on rotation invariance: The chart displays the rotation invariance of various models and methods. From the results, we can see that the WCF method has achieved improvements in mean average precision and mean average recall. The results indicate that WCF possesses better rotation consistency.</figcaption>\n</figure>",
92
+ "capture": "Table 2: Performance on rotation invariance: The chart displays the rotation invariance of various models and methods. From the results, we can see that the WCF method has achieved improvements in mean average precision and mean average recall. The results indicate that WCF possesses better rotation consistency."
93
+ }
94
+ },
95
+ "image_paths": {
96
+ "1": {
97
+ "figure_path": "2406.19540v2_figure_1.png",
98
+ "caption": "Figure 1: Comparison of Box Fusion and Circle Fusion Methods for Object Detection. This figure delineates the differences between the ensemble results of box representation and circle representation. Box fusion alters the dimensions of the box, thereby changing its shape, while circle fusion only modifies the radius of the circle, preserving its shape. For the detection of medical ball-shaped objects, circle representation can achieve better performance.",
99
+ "url": "http://arxiv.org/html/2406.19540v2/x1.png"
100
+ },
101
+ "2": {
102
+ "figure_path": "2406.19540v2_figure_2.png",
103
+ "caption": "Figure 2: The workflow of the proposed Weighted Circle Fusion (WCF) method. This figure delineates the specific steps involved in our method. The core of the method lies in counting the number of fused circles and calculating their average score, which is then used to eliminate potential erroneous detections.",
104
+ "url": "http://arxiv.org/html/2406.19540v2/x2.png"
105
+ },
106
+ "3": {
107
+ "figure_path": "2406.19540v2_figure_3.png",
108
+ "caption": "Figure 3: Result Visualization. This figure presents the detection outcomes of glomeruli on WSIs using our method. The yellow arrows highlight false negatives identified by other models or methods, while the blue arrows indicate false positives. It is evident that traditional fusion methods such as NMS and soft-NMS tend to merge more erroneous predictions. In contrast, the WCF method achieves superior fusion results, with fewer incorrect predictions and the inclusion of detections that individual models failed to identify, demonstrating its effectiveness in enhancing detection accuracy.",
109
+ "url": "http://arxiv.org/html/2406.19540v2/x3.png"
110
+ }
111
+ },
112
+ "validation": true,
113
+ "references": [],
114
+ "url": "http://arxiv.org/html/2406.19540v2"
115
+ }
20241127/2407.03263v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2407.03297v2.json ADDED
@@ -0,0 +1,523 @@
1
+ {
2
+ "title": "Improved Noise Schedule for Diffusion Training",
3
+ "abstract": "Diffusion models have emerged as the de facto choice for generating high-quality visual signals across various domains.\nHowever, training a single model to predict noise across various levels poses significant challenges, necessitating numerous iterations and incurring significant computational costs.\nVarious approaches, such as loss weighting strategy design and architectural refinements, have been introduced to expedite convergence and improve model performance.\nIn this study, we propose a novel approach to design the noise schedule for enhancing the training of diffusion models. Our key insight is that the importance sampling of the logarithm of the Signal-to-Noise ratio (), theoretically equivalent to a modified noise schedule, is particularly beneficial for training efficiency when increasing the sample frequency around . This strategic sampling allows the model to focus on the critical transition point between signal dominance and noise dominance, potentially leading to more robust and accurate predictions.\nWe empirically demonstrate the superiority of our noise schedule over the standard cosine schedule.\nFurthermore, we highlight the advantages of our noise schedule design on the ImageNet benchmark, showing that the designed schedule consistently benefits different prediction targets.\nOur findings contribute to the ongoing efforts to optimize diffusion models, potentially paving the way for more efficient and effective training paradigms in the field of generative AI.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Diffusion models have emerged as a pivotal technique for generating high-quality visual signals across diverse domains, including image synthesis Ramesh et al. (2022 ###reference_b32###); Saharia et al. (2022 ###reference_b34###); Rombach et al. (2022 ###reference_b33###) , video generation Ho et al. (2022 ###reference_b17###); Singer et al. (2023 ###reference_b36###); Brooks et al. (2024 ###reference_b4###), and even 3D object generation Wang et al. (2022 ###reference_b39###); Nichol et al. (2022 ###reference_b28###).\nOne of the key strengths of diffusion models lies in their ability to approximate complex distributions, where Generative Adversarial Networks (GANs) may encounter difficulties.\nDespite the substantial computational resources and numerous training iterations required for convergence, improving the training efficiency of diffusion models is essential for their application in large-scale scenarios, such as high-resolution image synthesis and long video generation.\nRecent efforts to enhance diffusion model training efficiency have primarily focused on two directions.\n\nThe first approach centers on architectural improvements. For instance, the use of Adaptive Layer Normalization Gu et al. (2022 ###reference_b13###), when combined with zero initialization in the Transformer architecture Peebles & Xie (2023 ###reference_b30###), has shown promising results. MM-DiT Esser et al. (2024 ###reference_b10###) extends this approach to multi-modality by employing separate weights for vision and text processing. Similarly, U-shaped skip connections within Transformers Hoogeboom et al. (2023 ###reference_b18###); Bao et al. (2022 ###reference_b2###); Crowson et al. (2024 ###reference_b8###) and reengineered layer designs Karras et al. (2024 ###reference_b20###) have contributed to more efficient learning processes.\nThe second direction explores various loss weighting strategies to accelerate training convergence. Works such as eDiff-I Balaji et al. (2022 ###reference_b1###) and Ernie-ViLG 2.0 Feng et al. (2022 ###reference_b11###) address training difficulties across noise intensities using a Mixture of Experts approach. Other studies have investigated prioritizing specific noise levels Choi et al. (2022 ###reference_b7###) and reducing weights of noisy tasks Hang et al. (2023 ###reference_b14###) to enhance learning effectiveness. Recent developments include a softer weighting approach for high-resolution image synthesis Crowson et al. (2024 ###reference_b8###) and empirical findings on the importance of intermediate noise intensities Esser et al. (2024 ###reference_b10###).\nDespite these advances, the fundamental role of noise scheduling in diffusion model training remains underexplored.\nIn this study, we present a novel approach focusing on the fundamental role of noise scheduling, which is a function that determines how much noise is added to the input data at each timestep during the training process, controlling the distribution of noise levels that the neural network learns to remove.\nOur framework provides a unified perspective for analyzing noise schedules and importance sampling, leading to a straightforward method for designing noise schedules through the identification of curves in the distribution, as visualized in Figure 1 ###reference_###. 
Through empirical analysis, we discover that allocating more computation costs (FLOPs) to mid-range noise levels (around ) yields superior performance compared to increasing loss weights during the same period, particularly under constrained computational budgets.\nWe evaluate several different noise schedules, including Laplace, Cauchy, and the Cosine Shifted/Scaled variants, through comprehensive experiments using the ImageNet benchmark with a consistent training budget of 500K iterations (about 100 epochs). Our results, measured using the Fr\u00e9chet Inception Distance (FID) metric at both and resolutions, demonstrate that noise schedules with concentrated probability density around consistently outperform alternatives, with the Laplace schedule showing particularly favorable performance.\nThe key contributions of our work can be summarized as follows:\nA unified framework for analyzing and designing noise schedules in diffusion models, offering a more systematic approach to noise schedule optimization.\nEmpirical evidence demonstrating the superiority of mid-range noise level focus over loss weight adjustments for improving training efficiency.\nComprehensive evaluation and comparison of various noise schedules, providing practical guidelines for future research and applications in diffusion model training.\n###figure_1###"
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Method",
+ "text": ""
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Preliminaries",
+ "text": "Diffusion models Ho et al. (2020 ###reference_b16###); Yang et al. (2021 ###reference_b40###) learn to generate data by iteratively reversing the diffusion process. We denote the distribution of data points as .\nThe diffusion process systematically introduces noise to the data in a progressive manner. In a continuous setting, the noisy data at timestep is defined as follows:\nwhere and are the coefficients of the adding noise process, essentially representing the noise schedule.\nFor the commonly used prediction target velocity: Salimans & Ho (2022 ###reference_b35###), the diffusion model is trained through the Mean Squared Error (MSE) loss:\nwhere is the loss weight, denotes the condition information.\nIn the context of class-conditional generation tasks, represents the class label.\nCommon practices sample from the uniform distribution . Kingma et al. (2021 ###reference_b22###) introduced the Signal-to-Noise ratio as to measure the noise level of different states.\nNotably, monotonically decreases with increasing .\nSome works represent the loss weight from the perspective of SNR Salimans & Ho (2022 ###reference_b35###); Hang et al. (2023 ###reference_b14###); Crowson et al. (2024 ###reference_b8###).\nTo simplify, we denote to indicate the noise intensities.\nIn the Variance Preserving (VP) setting, the coefficients in Equation 1 ###reference_### can be calculated by , .\nWhile these foundational concepts have enabled significant progress in diffusion models, the choice of noise schedule remains somewhat ad hoc. This motivates us to develop a more systematic framework for analyzing and designing noise schedules by examining them from a probability perspective."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "Noise Schedule Design from A Probability Perspective",
+ "text": "The training process of diffusion models involves sampling timesteps from a uniform distribution. However, this uniform sampling in time actually implies a non-uniform sampling of noise intensities. We can formalize this relationship through the lens of importance sampling Bishop & Nasrabadi (2006 ###reference_b3###).\nSpecifically, when follows a uniform distribution, the sampling probability of noise intensity is given by:\nwhere the negative sign appears because monotonically decreases with .\nWe take cosine noise schedule Nichol & Dhariwal (2021 ###reference_b29###) as an example, where , .\nThen we can deduce that and .\nThus the distribution of is: .\nThis derivation illustrates the process of obtaining from a noise schedule . On the other hand, we can derive the noise schedule from the sampling probability of different noise intensities .\nBy integrating Equation 3 ###reference_###, we have:\nwhere represents the cumulative distribution function of . Thus we can obtain the noise schedule by applying the inverse function . In conclusion, during the training process, the importance sampling of varying noise intensities essentially equates to the modification of the noise schedules.\nTo illustrate this concept, let\u2019s consider the Laplace distribution as an example\n, we can derive the cumulative distribution function . Subsequently, we can obtain the inverse function to express the noise schedule in terms of : . Here, denotes the signum function, which equals 1 for positive inputs, for negative inputs.\nThe pseudo-code for implementing the Laplace schedule in the training of diffusion models is presented in A.1 ###reference_###.\nThis framework reveals that noise schedule design can be reframed as a probability distribution design problem. Rather than directly specifying how noise varies with time, we can instead focus on how to optimally distribute our sampling across different noise intensities.\nOur approach is also applicable to the recently popular flow matching with logit normal sampling scheme Esser et al. (2024 ###reference_b10###). Within our framework, we analyzed the distribution of its logSNR in A.4 ###reference_### and demonstrated its superiority over vanilla flow matching and cosine scheduling from the perspective of ."
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "Unified Formulation for Diffusion Training",
+ "text": "VDM++ Kingma & Gao (2023 ###reference_b23###) proposes a unified formulation that encompasses recent prominent frameworks and loss weighting strategies for training diffusion models, as detailed below:\nwhere signifies the training dataset, noise is drawn from a standard Gaussian distribution, and is the distribution of noise intensities.\nThis formulation provides a flexible framework that can accommodate various diffusion training strategies.\nDifferent predicting targets, such as and , can also be re-parameterized to -prediction.\n denotes the loss weighting strategy.\nAlthough adjusting is theoretically equivalent to altering .\nIn practical training, directly modifying to concentrate computational resources on training specific noise levels is more effective than enlarging the loss weight on specific noise levels.\nGiven these insights, our research focuses on how to design an optimal that can effectively allocate computational resources across different noise levels. By carefully crafting the distribution of noise intensities, we aim to improve the overall training process and the quality of the resulting diffusion models.\n\nWith the unified formulation providing a flexible framework for diffusion training, we can now apply these theoretical insights to practical settings. By carefully designing the distribution of noise intensities, we can optimize the training process and improve the performance of diffusion models in real-world applications. In the following section, we will explore practical strategies for noise schedules that leverage these insights to achieve better results."
+ },
+ {
+ "section_id": "2.4",
+ "parent_section_id": "2",
+ "section_name": "Practical Settings",
+ "text": "Stable Diffusion 3 Esser et al. (2024 ###reference_b10###), EDM Karras et al. (2022 ###reference_b19###), and Min-SNR Hang et al. (2023 ###reference_b14###); Crowson et al. (2024 ###reference_b8###) find that the denoising tasks with medium noise intensity is most critical to the overall performance of diffusion models. Therefore, we increase the probability of when is of moderate size, and obtain a new noise schedule according to Section 2.2 ###reference_###.\nSpecifically, we investigate four novel noise strategies, named Cosine Shifted, Cosine Scaled, Cauchy, and Laplace respectively. The detailed setting are listed in Table 1 ###reference_###. Cosine Shifted use the hyperparameter to explore where the maximum probability should be used. Cosine Scaled explores how much the noise probability should be increased under the use of Cosine strategy to achieve better results. The Cauchy distribution, provides another form of function that can adjust both amplitude and offset simultaneously. The Laplace distribution is characterized by its mean and scale , controls both the magnitude of the probability and the degree of concentration of the distribution. These strategies contain several hyperparameters, which we will explore in Section 3.5 ###reference_###. Unless otherwise stated, we report the best hyperparameter results.\nBy re-allocating the computation resources at different noise intensities, we can train the complete denoising process.\nDuring sampling process,\nwe align the sampled SNRs as the cosine schedule to ensure a fair comparison.\nSpecifically, first we sample from uniform distribution , then get the corresponding SNRs from Cosine schedule: . According to Equation 5 ###reference_###, we get the corresponding by inverting these SNR values through the respective noise schedules. Finally, we use DDIM Song et al. (2021 ###reference_b37###) to sample with these new calculated .\nIt is important to note that, from the perspective of the noise schedule, how to allocate the computation resource during inference is also worth reconsideration. We will not explore it in this paper and leave this as future work."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Experiments",
+ "text": ""
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "implementation Details",
51
+ "text": "Dataset. We conduct experiments on ImageNet Deng et al. (2009 ###reference_b9###) with and resolution.\nFor each image, we follow the preprocessing in Rombach et al. (2022 ###reference_b33###) to center crop and encode images to latents.\nThe resulting compressed latents have dimensions of for images and for images, effectively reducing the spatial dimensions while preserving essential visual information.\nNetwork Architecture.\nWe adopt DiT-B from Peebles & Xie (2023 ###reference_b30###) as our backbone.\nWe replace the last AdaLN Linear layer with vanilla linear.\nOthers are kept the same as the original implementation.\nThe patch size is set to 2 and the projected sequence length of is .\nThe class condition is injected through the adaptive layernorm.\nIn this study, our primary objective is to demonstrate the effectiveness of our proposed noise schedule compared to existing schedules under a fixed training budget, rather than to achieve state-of-the-art results. Consequently, we do not apply our method to extra-large (XL) scale models.\nTraining Settings.\nWe adopt the Adam optimizer Kingma & Ba (2014 ###reference_b21###) with constant learning rate .\nWe set the batch size to 256 following Peebles & Xie (2023 ###reference_b30###) and Gao et al. (2023 ###reference_b12###).\nEach model is trained for 500K iterations (about 100 epochs) if not specified. Our implementation is primarily based on OpenDiT Zhao et al. (2024 ###reference_b41###) and experiments are mainly conducted on 816G V100 GPUs.\nDifferent from the default discrete diffusion setting with linear noise schedule in the code base, we implement the diffusion process in a continuous way. Specifically, we sample from uniform distribution .\nBaselines and Metrics.\nWe compare our proposed noise schedule with several baseline settings in Table 2 ###reference_###. For each setting, we sample images using DDIM Song et al. (2021 ###reference_b37###) with 50 steps. Despite the noise strategy for different settings may be different, we ensure they share the same at each sampling step. This approach is adopted to exclusively investigate the impact of the noise strategy during the training phase. Moreover, we report results with different classifier-free guidance scalesHo & Salimans (2021 ###reference_b15###), and the FID is calculated using 10K generated images.\n\nWe sample with three CFG scales and select the optimal one to better evaluate the actual performance of different models."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Comparison with baseline schedules and loss weight designs",
+ "text": "This section details the principal findings from our experiments on the ImageNet-256 dataset, focusing on the comparative effectiveness of various noise schedules and loss weightings in the context of CFG values. Table 3 ###reference_### illustrates these comparisons, showcasing the performance of each method in terms of the FID-10K score.\nThe experiments reveal that our proposed noise schedules, particularly Laplace, achieve the most notable improvements over the traditional cosine schedule, as indicated by the bolded best scores and the blue numbers representing the reductions compared to baseline\u2019s best score of 10.85.\nWe also provide a comparison with methods that adjust the loss weight, including Min-SNR and Soft-Min-SNR.\nUnless otherwise specified, the hyperparameter for both loss weighting schemes is set to 5.\nWe find that although these methods can achieve better results than the baseline, they are still not as effective as our method of modifying the noise schedule. This indicates that deciding where to allocate more computational resources is more efficient than adjusting the loss weight. Compared with other noise schedules like EDM Karras et al. (2022 ###reference_b19###) and Flow Matching Lipman et al. (2022 ###reference_b25###), we found that no matter which CFG value, our results significantly surpass theirs under the same training iterations.\nFurthermore, we investigate the convergence speed of these method, and the results are shown in Figure 2 ###reference_###. It can be seen that adjusting the noise schedule converges faster than adjusting the loss weight. Additionally, we also notice that the optimal training method may vary when using different CFG values for inference, but adjusting the noise schedule generally yields better results.\n###figure_2### ###figure_3###"
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "Robustness on different predicting targets",
+ "text": "We evaluate the effectiveness of our designed noise schedule across three commonly adopted prediction targets: , , and .\nThe results are shown in Table 4 ###reference_###.\nWe observed that regardless of the prediction target, our proposed Laplace strategy significantly outperforms the Cosine strategy. It\u2019s noteworthy that as the Laplace strategy focuses the computation on medium noise levels during training, the extensive noise levels are less trained, which could potentially affect the overall performance. Therefore, we have slightly modified the inference strategy of DDIM to start sampling from ."
+ },
+ {
+ "section_id": "3.4",
+ "parent_section_id": "3",
+ "section_name": "Robustness on high resolution images",
+ "text": "To explore the robustness of the adjusted noise schedule to different resolutions, we also designed experiments on Imagenet-512. As pointed out by Chen (2023 ###reference_b6###), the adding noise strategy will cause more severe signal leakage as the resolution increases. Therefore, we need to adjust the hyperparameters of the noise schedule according to the resolution.\nSpecifically, the baseline Cosine schedule achieves the best performance when the CFG value equals to 3. So we choose this CFG value for inference.\nThrough systematic experimentation, we explored the appropriate values for the Laplace schedule\u2019s parameter , testing within the range {0.5, 0.75, 1.0}, and determined that was the most effective, resulting in an FID score of 9.09. This indicates that despite the need for hyperparameter tuning, adjusting the noise schedule can still stably bring performance improvements."
+ },
+ {
+ "section_id": "3.5",
+ "parent_section_id": "3",
+ "section_name": "Ablation Study",
+ "text": "We conduct an ablation study to analyze the impact of hyperparameters on various distributions of , which are enumerated below.\nLaplace distribution, known for its simplicity and exponential decay from the center, is straightforward to implement. We leverage its symmetric nature and adjust the scale parameter to center the peak at the middle timestep.\nWe conduct experiments with different Laplace distribution scales . The results are shown in Figure 3 ###reference_###. The baseline with standard cosine schedule achieves FID score of 17.79 with CFG=1.5, 10.85 with CFG=2.0, and 11.06 with CFG=3.0 after 500K iterations.\nWe can see that the model with Laplace distribution scale achieves the best performance 7.96 with CFG=3.0, which is relatively 26.6% better than the baseline.\n###figure_4### Cauchy distribution is another heavy-tailed distribution that can be used for noise schedule design. The distribution is not symmetric when the location parameter is not 0.\nWe conduct experiments with different Cauchy distribution parameters and the results are shown in Table 6 ###reference_###.\nCauchy(0, 0.5) means with .\nWe can see that the model with achieve better performance than the other two settings when fixing to 1.\nIt means that the model with more probability mass around performs better than others biased to negative or positive directions.\nCosine Shifted Hoogeboom et al. (2023 ###reference_b18###) is the shifted version of the standard cosine schedule.\nWe evaluate the schedules with both positive and negative values to comprehensively assess its impact on model performance.\nShifted with achieves FID-10k score with CFG .\nResults with shifted value are .\nComparatively, both scenarios demonstrate inferior performance relative to the baseline cosine schedule (). Additionally, by examining the data presented in Table 6 ###reference_###, we find concentrated on can best improve the results.\nCosine Scaled is also a modification of Cosine schedule. When , it becomes the standard Cosine version. means sampling more heavily around while means sampling more uniformly of all . We report related results in Table 7 ###reference_###.\nOur experimental results reveal a clear trend: larger values of consistently outperform the baseline, highlighting the benefits of focused sampling near .\nHowever, it\u2019s crucial to note that should not be excessively large and must remain within a valid range to maintain stable training dynamics.\nFor example, decreasing from 0.5 to 0.25 hurts the performance and cause the FID score to drop.\nStriking the right balance is key to optimizing performance.\nIn our experiments, a model trained with achieved a remarkable score of 8.04, representing a substantial improvement over the baseline.\nThe experiments with various noise schedules, including Laplace, Cauchy, Cosine Shifted, and Cosine Scaled, reveal a shared phenomenon: models perform better when the noise distribution or schedule is concentrated around . For the Laplace distribution, a scale of yielded the best performance, outperforming the baseline by 26.6%. In the case of the Cauchy distribution, models with a location parameter performed better than those with values biased towards negative or positive directions. The Cosine Shifted schedule showed inferior performance when shifted away from , while the Cosine Scaled schedule demonstrated that larger values of (sampling more heavily around ) consistently outperformed the baseline, with an optimal improvement of 25.9% at . 
This consistent trend suggests that focusing the noise distribution or schedule near is beneficial for model performance.\n\nWhile these different schedules take various mathematical forms, they all achieve similar optimal performance when given equivalent training budgets. The specific mathematical formulation is less crucial than the underlying design philosophy: increasing the sampling probability of intermediate noise levels. This principle provides a simple yet effective guideline for designing noise schedules."
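A small sketch of the sampling densities p(lambda) compared in this ablation; the sech-shaped cosine density follows from the derivation in Section 2.2, and the parameter values are the ones reported as performing best.

```python
import math
import torch

def p_laplace(lam, mu=0.0, b=0.5):
    return torch.exp(-(lam - mu).abs() / b) / (2.0 * b)

def p_cauchy(lam, mu=0.0, gamma=0.5):
    return 1.0 / (math.pi * gamma * (1.0 + ((lam - mu) / gamma) ** 2))

def p_cosine(lam):
    # Density over logSNR implied by the standard cosine schedule.
    return 1.0 / (2.0 * math.pi * torch.cosh(lam / 2.0))

lam = torch.linspace(-10.0, 10.0, 2001)
for name, p in [("laplace", p_laplace(lam)), ("cauchy", p_cauchy(lam)), ("cosine", p_cosine(lam))]:
    mask = lam.abs() < 1.0
    near_zero = torch.trapz(p[mask], lam[mask])     # probability mass with |logSNR| < 1
    print(name, round(near_zero.item(), 3))
```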
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Related Works",
+ "text": ""
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "In this paper, we present a novel method for enhancing the training of diffusion models by strategically redefining the noise schedule. Our theoretical analysis demonstrates that this approach is equivalent to performing importance sampling on the noise. Empirical results show that our proposed Laplace noise schedule, which focuses computational resources on mid-range noise levels, yields superior performance compared to adjusting loss weights under constrained computational budgets. This study not only contributes significantly to the development of efficient training techniques for diffusion models but also offers promising potential for future large-scale applications."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "Appendix A Appendix",
+ "text": "We provide a simple PyTorch implementation for the Laplace noise schedule and its application in training. This example can be adapted to other noise schedules, such as the Cauchy distribution, by replacing the laplace_noise_schedule function. The model accepts noisy samples , timestep , and an optional condition tensor as inputs. This implementation supports prediction of .\nFor a Laplace distribution with location parameter and scale parameter , the probability density function (PDF) is given by:\nThe cumulative distribution function (CDF) can be derived as follows:\nTo obtain as a function of , we solve the inverse function:\nFor a Cauchy distribution with location parameter and scale parameter , the PDF is given by:\nThe corresponding CDF is:\nTo derive , we proceed as follows:\nSolving for , we obtain:\nWe observe that incorporating importance sampling of timesteps into the cosine schedule bears similarities to the Laplace schedule. Typically, the distribution of timestep is uniform . To increase the sampling frequency of middle-level timesteps, we propose modifying the sampling distribution to a simple polynomial function:\nwhere is the normalization factor ensuring that the cumulative distribution function (CDF) equals 1 at .\nTo sample from this distribution, we first sample uniformly from and then map it using the following function:\nWe incorporate the polynomial sampling of into the cosine schedule , whose inverse function is . Let us first consider the situation where :\nWe then derive the expression with respect to :\nConsidering symmetry, we obtain the final distribution with respect to as follows:\nWe visualize the schedule discussed above and compare it with Laplace schedule in Figure 4 ###reference_###.\nWe can see that for Laplace and for cosine-ply matches well.\nWe also conduct experiments on such schedule and present results in Table 8 ###reference_###.\nThey perform similar and both better than the standard cosine schedule.\nWe visualize the schedules discussed above and compare them with the Laplace schedule in Figure 4 ###reference_###. The results demonstrate that Laplace with and cosine-ply with exhibit a close correspondence. To evaluate the performance of these schedules, we conducted experiments and present the results in Table 8 ###reference_###. Both the Laplace and cosine-ply schedules show similar performance, and both outperform the standard cosine schedule.\n###figure_5### In Stable Diffusion 3 Esser et al. (2024 ###reference_b10###) and Movie Gen Polyak et al. (2024 ###reference_b31###), logit-normal sampling is applied to improve the training efficiency of flow models. 
To better understand this approach, we present a detailed derivation from the logit-normal distribution to the probability density function of logSNR .\nLet the Logit transformation of random variable follow a normal distribution:\nThen, the probability density function of is:\nwhere , and and are constants.\nConsider the variable transformation:\nOur goal is to find the probability density function of random variable .\nFirst, we solve for in terms of :\nNext, we calculate the Jacobian determinant :\nUsing the variable transformation formula:\nWe calculate :\nMultiplying by the Jacobian determinant:\nTherefore, the probability density function of is:\nThis shows that follows a normal distribution with mean and variance :\nThe mean and variance are:\nTo verify normalization, we integrate over its domain:\nThus, satisfies the normalization condition for probability density functions.\nWe compare the standard cosine scheudle Nichol & Dhariwal (2021 ###reference_b29###), Flow Matching Liu et al. (2022 ###reference_b26###); Lipman et al. (2022 ###reference_b25###), and Flow Matching with Logit-normal sampling Esser et al. (2024 ###reference_b10###); Polyak et al. (2024 ###reference_b31###).\nThe probability density functions of these schedules are visualized in Figure 5 ###reference_###.\nOur analysis reveals that Flow Matching with Logit-normal sampling concentrates more probability mass around compared to both the standard Cosine and Flow Matching schedules, resulting in improved training efficiency Esser et al. (2024 ###reference_b10###); Polyak et al. (2024 ###reference_b31###).\n###figure_6### To investigate the significance of training intervals, we conducted controlled experiments using a simplified setup. We divided the time range into four equal segments: . We first trained a base model over the complete range for 1M iterations, then fine-tuned it separately on each bin for 140k iterations to obtain four specialized checkpoints .\nFor evaluation, we designed experiments using both the base model and fine-tuned checkpoints . To assess the importance of each temporal segment, we selectively employed the corresponding fine-tuned checkpoint during its specific interval while maintaining the base model for remaining intervals. For example, when evaluating , we used within its designated interval and elsewhere.\nThe FID results across these four experimental configurations are presented in Figure 6 ###reference_###. Our analysis reveals that optimizing intermediate timesteps (bin1 and bin2) yields superior performance, suggesting the critical importance of these temporal regions in the diffusion process.\n###figure_7### We investigate the comparative effectiveness of our approach when applied as a noise schedule versus a loss weighting mechanism. We adopt Equation 21 ###reference_### as our primary noise schedule due to its foundation in the cosine schedule and demonstrated superior FID performance. 
To evaluate its versatility, we reformulate the importance sampling as a loss weighting strategy and compare it against established weighting schemes, including Min-SNR and Soft-Min-SNR.\nFigure 7 ###reference_### illustrates the loss weight derived from Cosine-Ply (=2) schedule alongside Min-SNR and Soft-Min-SNR.\nWe can observe that under the setting of predict target as , Min-SNR and Soft-Min-SNR can be seemed as putting more weight on intermediate levels, aligning with our earlier findings on the importance of middle-level noise densities.\n###figure_8### ImageNet, comprising over one million natural images, has been widely adopted as a benchmark dataset for validating improvements in diffusion models Peebles & Xie (2023 ###reference_b30###); Karras et al. (2024 ###reference_b20###).\nIn addition to ImageNet, we evaluate our approach on the CelebA Liu et al. (2015 ###reference_b27###) dataset ( resolution in pixel space), which consists of face images. We employ a DiT architecture (12 layers, embedding dimension of 512, 8 attention heads, and patch size of 4) using different noise schedules. This is an unconditional generation setting within a single domain. We present FID results as follows:\nWe also follow Stable Diffusion 3 Esser et al. (2024 ###reference_b10###), train on a more complicated dataset CC12M Changpinyo et al. (2021 ###reference_b5###) dataset (over 12M image-text pairs) and report the FID results here. We download the dataset using webdataset. We train a DiT-base model using CLIP as text conditioner. The images are cropped and resized to resolution, compressed to latents and trained for 200k iterations at batch size 256.\nOur method demonstrated strong generalization capabilities across both unconditional image generation using the CelebA dataset and text-to-image generation using the CC12M dataset.\nWe present addition visual results in Figure 8 ###reference_### to demonstrate the differences in generation quality between models trained with Cosine and our proposed Laplace schedule. Each case presents two rows of outputs, where the upper row shows results from the cosine schedule and the lower row displays results from our Laplace schedule. Each row contains five images corresponding to models trained for 100k, 200k, 300k, 400k, and 500k iterations, illustrating the progression of generation quality across different training stages.\nFor each case, the initial noise inputs are identical.\nAs shown in the results, our method achieves faster convergence in both basic object formation (at 100k iterations) and fine detail refinement, demonstrating superior learning efficiency throughout the training process.\n###figure_9###"
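A quick Monte-Carlo check of the inverse-CDF construction described in this appendix (a sketch; sample count and tolerances are arbitrary): pushing t ~ U[0, 1] through the Laplace quantile should reproduce a Laplace(mu, b) distribution over logSNR.

```python
import torch

mu, b = 0.0, 0.5
t = torch.rand(200_000).clamp(1e-6, 1.0 - 1e-6)
lam = mu - b * torch.sign(0.5 - t) * torch.log1p(-2.0 * (t - 0.5).abs())

empirical_mean = lam.mean().item()              # should be close to mu = 0
empirical_b = (lam - mu).abs().mean().item()    # E|lam - mu| equals b for a Laplace
print(empirical_mean, empirical_b)
```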
+ }
+ ],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.12\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S2.T1.2.2.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.2.3.1\">Noise Schedule</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.1.1.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.2.2.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.4.4.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\">Cosine</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.3.3.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.4.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.6.6.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\">Laplace</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.5.5.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.6.6.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.8.8.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\">Cauchy</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.7.7.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.8.8.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.10.10.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\">Cosine Shifted</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.9.9.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.10.10.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S2.T1.12.12.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\">Cosine Scaled</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.11.11.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.12.12.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>\nOverview of various Noise Schedules. The table categorizes them into five distinct types: Cosine, Laplace, Cauchy, and two variations of Cosine schedules. The second column denotes the sampling probability at different noise intensities . The last column indicates how to sample noise intensities for training. 
We derived their relationship in Equation\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.03297v2#S2.E3\" title=\"In 2.2 Noise Schedule Design from A Probability Perspective \u2023 2 Method \u2023 Improved Noise Schedule for Diffusion Training\"><span class=\"ltx_text ltx_ref_tag\">3</span></a> and\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.03297v2#S2.E5\" title=\"In 2.2 Noise Schedule Design from A Probability Perspective \u2023 2 Method \u2023 Improved Noise Schedule for Diffusion Training\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>.</figcaption>\n</figure>",
+ "capture": "Table 1: \nOverview of various Noise Schedules. The table categorizes them into five distinct types: Cosine, Laplace, Cauchy, and two variations of Cosine schedules. The second column denotes the sampling probability at different noise intensities . The last column indicates how to sample noise intensities for training. We derived their relationship in Equation\u00a03 and\u00a05."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.12\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T2.2.2.3\">Method</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.4.4.3\">Cosine</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.4.4.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.6.6.3\">Min-SNR\u00a0</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.6.6.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.8.8.3\">Soft-Min-SNR\u00a0</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.8.8.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.10.10.3\">FM-OT\u00a0</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.10.10.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S3.T2.12.12.3\">EDM\u00a0</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.12.12.2\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Comparison of different methods and related loss weighting strategies. The is introduced in Equation\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.03297v2#S2.E6\" title=\"In 2.3 Unified Formulation for Diffusion Training \u2023 2 Method \u2023 Improved Noise Schedule for Diffusion Training\"><span class=\"ltx_text ltx_ref_tag\">6</span></a>.\nThe original for Soft-Min-SNR\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Crowson et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.03297v2#bib.bib8\" title=\"\">2024</a>)</cite> was developed within the EDM\u2019s denoiser framework. In this study, we align it with the cosine schedule to ensure a fair comparison.</figcaption>\n</figure>",
+ "capture": "Table 2: Comparison of different methods and related loss weighting strategies. The is introduced in Equation\u00a06.\nThe original for Soft-Min-SNR\u00a0Crowson et\u00a0al. (2024) was developed within the EDM\u2019s denoiser framework. In this study, we align it with the cosine schedule to ensure a fair comparison."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T3.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T3.1.1.1.1\">Method</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.1.1.1.2\">CFG=1.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.1.1.1.3\">CFG=2.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.1.1.1.4\">CFG=3.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T3.1.2.2.1\">Cosine\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Nichol &amp; Dhariwal (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.03297v2#bib.bib29\" title=\"\">2021</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.2.2.2\">17.79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.2.2.3\">10.85</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.2.2.4\">11.06</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T3.1.3.3.1\">EDM\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Karras et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.03297v2#bib.bib19\" title=\"\">2022</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.3.3.2\">26.11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.3.3.3\">15.09</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.3.3.4\">11.56</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T3.1.4.4.1\">FM-OT\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Lipman et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.03297v2#bib.bib25\" title=\"\">2022</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.4.4.2\">24.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.4.4.3\">14.66</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.4.4.4\">11.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T3.1.5.5.1\">Min-SNR\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Hang et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.03297v2#bib.bib14\" title=\"\">2023</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.5.5.2\">16.06</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.5.5.3\">9.70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.5.5.4\">10.43</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T3.1.6.6.1\">Soft-Min-SNR\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Crowson et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.03297v2#bib.bib8\" title=\"\">2024</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.6.6.2\">14.89</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.6.6.3\">9.07</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.6.6.4\">10.66</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T3.1.7.7.1\">Cosine Shifted\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">Hoogeboom et\u00a0al. 
(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.03297v2#bib.bib18\" title=\"\">2023</a>)</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.7.7.2\">19.34</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.7.7.3\">11.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.1.7.7.4\">11.13</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T3.1.8.8.1\">Cosine Scaled</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.8.8.2\">12.74</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.8.8.3\">8.04</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.8.8.4\">11.02</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T3.1.9.9.1\">Cauchy</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.9.9.2\">12.91</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.9.9.3\">8.14</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.1.9.9.4\">11.02</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.1.10.10\" style=\"background-color:#BFBFBF;\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S3.T3.1.10.10.1\"><span class=\"ltx_text\" id=\"S3.T3.1.10.10.1.1\" style=\"background-color:#BFBFBF;\">Laplace</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T3.1.10.10.2\"><span class=\"ltx_text\" id=\"S3.T3.1.10.10.2.1\" style=\"background-color:#BFBFBF;\">16.69</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T3.1.10.10.3\"><span class=\"ltx_text\" id=\"S3.T3.1.10.10.3.1\" style=\"background-color:#BFBFBF;\">9.04</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T3.1.10.10.4\">\n<span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S3.T3.1.10.10.4.1\" style=\"background-color:#BFBFBF;\">7.96</span><span class=\"ltx_text\" id=\"S3.T3.1.10.10.4.2\" style=\"background-color:#BFBFBF;\"> (<span class=\"ltx_text\" id=\"S3.T3.1.10.10.4.2.1\" style=\"color:#367DBD;\">-2.89</span>)</span>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Comparison of various noise schedules and loss weightings on ImageNet-256, showing the performance (in terms of FID-10K) of different methods under different CFG values.\nThe best results highlighted in bold and the <span class=\"ltx_text\" id=\"S3.T3.3.1\" style=\"color:#367DBD;\">blue</span> numbers represent the improvement when compared with the baseline FID 10.85. The line in gray is our suggested noise schedule.</figcaption>\n</figure>",
+ "capture": "Table 3: Comparison of various noise schedules and loss weightings on ImageNet-256, showing the performance (in terms of FID-10K) of different methods under different CFG values.\nThe best results highlighted in bold and the blue numbers represent the improvement when compared with the baseline FID 10.85. The line in gray is our suggested noise schedule."
+ },
+ "4": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T4.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T4.3.4.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S3.T4.3.4.1.1\">Predict Target</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T4.3.4.1.2\">Noise Schedule</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T4.3.4.1.3\">100K</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T4.3.4.1.4\">200k</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T4.3.4.1.5\">300k</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T4.3.4.1.6\">400k</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T4.3.4.1.7\">500k</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T4.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T4.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.1.2\">Cosine</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.1.3\">35.20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.1.4\">17.60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.1.5\">13.37</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.1.6\">11.84</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.1.7\">11.16</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.3.5.1\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T4.3.5.1.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.5.1.2\">Laplace (Ours)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.5.1.3\">21.78</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.5.1.4\">10.86</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.5.1.5\">9.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.5.1.6\">8.73</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.5.1.7\">8.48</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T4.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.2\">Cosine</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.3\">25.70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.4\">14.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.5\">11.78</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.6\">11.26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.7\">11.06</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.3.6.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T4.3.6.2.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.6.2.2\">Laplace (Ours)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.6.2.3\">18.03</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.6.2.4\">9.37</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.6.2.5\">8.31</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.6.2.6\">8.07</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.3.6.2.7\">7.96</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T4.3.3.1\"></th>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.3.3.2\">Cosine</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.3.3.3\">28.63</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.3.3.4\">15.80</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.3.3.5\">12.49</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.3.3.6\">11.14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.3.3.7\">10.46</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.3.7.3\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_bb\" id=\"S3.T4.3.7.3.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.3.7.3.2\">Laplace (Ours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.3.7.3.3\">27.98</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.3.7.3.4\">13.92</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.3.7.3.5\">11.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.3.7.3.6\">10.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.3.7.3.7\">9.53</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Effectiveness evaluated using FID-10K score on different predicting targets: , , and . The proposed <span class=\"ltx_text ltx_font_italic\" id=\"S3.T4.11.1\">Laplace</span> schedule performs better than the baseline Cosine schedule along with training iterations.</figcaption>\n</figure>",
+ "capture": "Table 4: Effectiveness evaluated using FID-10K score on different predicting targets: , , and . The proposed Laplace schedule performs better than the baseline Cosine schedule along with training iterations."
+ },
+ "5": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T5\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T5.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T5.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T5.1.1.1.1\">Noise Schedule</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T5.1.1.1.2\">Cosine</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T5.1.1.1.3\">Laplace</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T5.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T5.1.2.1.1\">FID-10K</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T5.1.2.1.2\">11.91</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S3.T5.1.2.1.3\">\n<span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S3.T5.1.2.1.3.1\">9.09</span> (<span class=\"ltx_text\" id=\"S3.T5.1.2.1.3.2\" style=\"color:#367DBD;\">-2.82</span>)</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>\nFID-10K results on ImageNet-512. All models are trained for 500K iterations.\n</figcaption>\n</figure>",
+ "capture": "Table 5: \nFID-10K results on ImageNet-512. All models are trained for 500K iterations.\n"
+ },
+ "6": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T6\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T6.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T6.1.1.1\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S3.T6.1.1.1.1\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T6.1.1.1.2\">Cauchy(0, 0.5)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T6.1.1.1.3\">Cauchy(0, 1)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T6.1.1.1.4\">Cauchy(-1, 1)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T6.1.1.1.5\">Cauchy(1, 1)</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T6.1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T6.1.2.2.1\">CFG=1.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T6.1.2.2.2\">12.91</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T6.1.2.2.3\">14.32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T6.1.2.2.4\">18.12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T6.1.2.2.5\">16.60</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T6.1.3.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.3.3.1\">CFG=2.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.3.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T6.1.3.3.2.1\">8.14</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.3.3.3\">8.93</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.3.3.4\">10.38</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.3.3.5\">10.19</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T6.1.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T6.1.4.4.1\">CFG=3.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T6.1.4.4.2\">11.02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T6.1.4.4.3\">11.26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T6.1.4.4.4\">10.81</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T6.1.4.4.5\">10.94</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>\nFID-10k results on ImageNet-256 with different Cauchy distribution parameters.\n</figcaption>\n</figure>",
+ "capture": "Table 6: \nFID-10k results on ImageNet-256 with different Cauchy distribution parameters.\n"
+ },
+ "7": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T7\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T7.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T7.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S3.T7.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T7.1.1.2\">1.3</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T7.1.1.3\">1.1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T7.1.1.4\">0.5</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T7.1.1.5\">0.25</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T7.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T7.1.2.1.1\">CFG=1.5</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T7.1.2.1.2\">39.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T7.1.2.1.3\">22.60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T7.1.2.1.4\">12.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T7.1.2.1.5\">15.83</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T7.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S3.T7.1.3.2.1\">CFG=2.0</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T7.1.3.2.2\">23.38</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T7.1.3.2.3\">12.98</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T7.1.3.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T7.1.3.2.4.1\">8.04</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T7.1.3.2.5\">8.64</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T7.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb\" id=\"S3.T7.1.4.3.1\">CFG=3.0</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T7.1.4.3.2\">13.94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T7.1.4.3.3\">11.16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T7.1.4.3.4\">11.02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T7.1.4.3.5\">8.26</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 7: </span>\nFID-10k results on ImageNet-256 with different scales of Cosine Scaled distribution.\n</figcaption>\n</figure>",
+ "capture": "Table 7: \nFID-10k results on ImageNet-256 with different scales of Cosine Scaled distribution.\n"
+ },
+ "8": {
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T8\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T8.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.T8.2.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"A1.T8.2.3.1.1\">Iterations</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T8.2.3.1.2\">100,000</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T8.2.3.1.3\">200,000</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T8.2.3.1.4\">300,000</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T8.2.3.1.5\">400,000</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T8.2.3.1.6\">500,000</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T8.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"A1.T8.1.1.1\">Cosine-ply ()</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A1.T8.1.1.2\">28.65</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A1.T8.1.1.3\">13.77</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A1.T8.1.1.4\">10.06</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A1.T8.1.1.5\">8.69</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A1.T8.1.1.6\">7.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T8.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"A1.T8.2.2.1\">Laplace ()</th>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A1.T8.2.2.2\">28.89</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A1.T8.2.2.3\">13.90</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A1.T8.2.2.4\">10.17</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A1.T8.2.2.5\">8.85</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A1.T8.2.2.6\">8.19</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 8: </span>Performance comparison of cosine-ply () and Laplace () schedules over different iteration counts</figcaption>\n</figure>",
129
+ "capture": "Table 8: Performance comparison of cosine-ply () and Laplace () schedules over different iteration counts"
130
+ },
131
+ "9": {
132
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T9\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T9.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T9.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"A1.T9.1.1.2\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T9.1.1.3\">Cosine</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T9.1.1.1\">Cosine-Ply (=2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T9.1.1.4\">Min-SNR</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T9.1.1.5\">Soft-Min-SNR</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T9.1.1.6\">Cosine-Ply as weight</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T9.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"A1.T9.1.2.1.1\">FID-10K</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T9.1.2.1.2\">10.85</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T9.1.2.1.3\">7.98</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T9.1.2.1.4\">9.70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T9.1.2.1.5\">9.07</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T9.1.2.1.6\">8.88</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 9: </span>Quantitative comparison of different noise scheduling strategies and loss weighting schemes. Lower FID scores indicate better performance.</figcaption>\n</figure>",
133
+ "capture": "Table 9: Quantitative comparison of different noise scheduling strategies and loss weighting schemes. Lower FID scores indicate better performance."
134
+ },
135
+ "10": {
136
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T10\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T10.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.T10.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"A1.T10.1.1.1\">FID \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T10.1.1.2\">100k</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T10.1.1.3\">150k</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T10.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T10.1.2.1.1\">cosine</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T10.1.2.1.2\">10.0696</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T10.1.2.1.3\">7.93795</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T10.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A1.T10.1.3.2.1\">Laplace (ours)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T10.1.3.2.2\">7.93795</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T10.1.3.2.3\">6.58359</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 10: </span>FID scores on CelebA dataset at different training iterations</figcaption>\n</figure>",
137
+ "capture": "Table 10: FID scores on CelebA dataset at different training iterations"
138
+ },
139
+ "11": {
140
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T11\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T11.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.T11.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"A1.T11.1.1.1\">FID \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T11.1.1.2\">200k</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T11.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A1.T11.1.2.1.1\">cosine</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T11.1.2.1.2\">58.3619</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T11.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A1.T11.1.3.2.1\">Laplace (ours)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T11.1.3.2.2\">54.3492 (-4.0127)</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 11: </span>FID scores on CC12M dataset at 200k iterations</figcaption>\n</figure>",
141
+ "capture": "Table 11: FID scores on CC12M dataset at 200k iterations"
142
+ }
143
+ },
144
+ "image_paths": {
145
+ "1": {
146
+ "figure_path": "2407.03297v2_figure_1.png",
147
+ "caption": "Figure 1: \nIllustration of the probability density functions of different noise schedules.",
148
+ "url": "http://arxiv.org/html/2407.03297v2/x1.png"
149
+ },
150
+ "2(a)": {
151
+ "figure_path": "2407.03297v2_figure_2(a).png",
152
+ "caption": "Figure 2: Comparison between adjusting the noise schedule, adjusting the loss weights and baseline setting. The Laplace noise schedule yields the best results and the fastest convergence speed.",
153
+ "url": "http://arxiv.org/html/2407.03297v2/x2.png"
154
+ },
155
+ "2(b)": {
156
+ "figure_path": "2407.03297v2_figure_2(b).png",
157
+ "caption": "Figure 2: Comparison between adjusting the noise schedule, adjusting the loss weights and baseline setting. The Laplace noise schedule yields the best results and the fastest convergence speed.",
158
+ "url": "http://arxiv.org/html/2407.03297v2/x3.png"
159
+ },
160
+ "3": {
161
+ "figure_path": "2407.03297v2_figure_3.png",
162
+ "caption": "Figure 3: \nFID-10K results on ImageNet-256 with location parameter \u03bc\ud835\udf07\\muitalic_\u03bc fixed to 0 and different Laplace distribution scales b\ud835\udc4fbitalic_b in {0.25,0.5,1.0,2.0,3.0}0.250.51.02.03.0\\{0.25,0.5,1.0,2.0,3.0\\}{ 0.25 , 0.5 , 1.0 , 2.0 , 3.0 }. Baseline denotes standard cosine schedule.",
163
+ "url": "http://arxiv.org/html/2407.03297v2/x4.png"
164
+ },
165
+ "4": {
166
+ "figure_path": "2407.03297v2_figure_4.png",
167
+ "caption": "Figure 4: Visualization of p\u2062(\u03bb)\ud835\udc5d\ud835\udf06p(\\lambda)italic_p ( italic_\u03bb ) for Laplace schedule and cosine schedule with polynomial timestep sampling.",
168
+ "url": "http://arxiv.org/html/2407.03297v2/extracted/6029035/figs/cosine-ply.png"
169
+ },
170
+ "5": {
171
+ "figure_path": "2407.03297v2_figure_5.png",
172
+ "caption": "Figure 5: Comparison of probability density functions for different flow matching approaches. The plot shows three distributions: Flow Matching with Logit-Normal sampling (blue), Flow Matching without Logit-Normal sampling (green), and the Cosine schedule (orange).",
173
+ "url": "http://arxiv.org/html/2407.03297v2/x5.png"
174
+ },
175
+ "6": {
176
+ "figure_path": "2407.03297v2_figure_6.png",
177
+ "caption": "Figure 6: \nComparative analysis of interval-specific fine-tuning effects. When sampling within interval (14,24)1424\\left(\\frac{1}{4},\\frac{2}{4}\\right)( divide start_ARG 1 end_ARG start_ARG 4 end_ARG , divide start_ARG 2 end_ARG start_ARG 4 end_ARG ), \u201cBin1\u201d indicates the use of fine-tuned weights \ud835\udc261subscript\ud835\udc261\\mathbf{m}_{1}bold_m start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, while \ud835\udc0c\ud835\udc0c\\mathbf{M}bold_M is used for other intervals. \u201cBaseline\u201d represents the use of base model \ud835\udc0c\ud835\udc0c\\mathbf{M}bold_M throughout all intervals, and \u201cAll Tuned\u201d denotes the application of interval-specific fine-tuned models within their respective ranges.",
178
+ "url": "http://arxiv.org/html/2407.03297v2/extracted/6029035/figs/moe-fid.png"
179
+ },
180
+ "7": {
181
+ "figure_path": "2407.03297v2_figure_7.png",
182
+ "caption": "Figure 7: Visualization of different loss weight schemes.",
183
+ "url": "http://arxiv.org/html/2407.03297v2/x6.png"
184
+ },
185
+ "8": {
186
+ "figure_path": "2407.03297v2_figure_8.png",
187
+ "caption": "Figure 8: \nVisual comparison of results generated by model trained by cosine schedule and our proposed Laplace. For each case, the above row is generated by cosine schedule, the below is generated by Laplace. The 5 images from left to right represents the results generated by the model trained for 100k, 200k, 300k, 400k, and 500k iterations.",
188
+ "url": "http://arxiv.org/html/2407.03297v2/x7.png"
189
+ }
190
+ },
191
+ "validation": true,
192
+ "references": [
193
+ {
194
+ "1": {
195
+ "title": "ediff-i: Text-to-image diffusion models with ensemble of expert denoisers.",
196
+ "author": "Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, Tero Karras, and Ming-Yu Liu.",
197
+ "venue": "arXiv preprint arXiv:2211.01324, 2022.",
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "2": {
203
+ "title": "All are worth words: A vit backbone for diffusion models.",
204
+ "author": "Fan Bao, Shen Nie, Kaiwen Xue, Yue Cao, Chongxuan Li, Hang Su, and Jun Zhu.",
205
+ "venue": "arXiv preprint arXiv:2209.12152, 2022.",
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "3": {
211
+ "title": "Pattern recognition and machine learning, volume 4.",
212
+ "author": "Christopher M Bishop and Nasser M Nasrabadi.",
213
+ "venue": "Springer, 2006.",
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "4": {
219
+ "title": "Video generation models as world simulators.",
220
+ "author": "Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh.",
221
+ "venue": "2024.",
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "5": {
227
+ "title": "Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts.",
228
+ "author": "Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut.",
229
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3558\u20133568, 2021.",
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "6": {
235
+ "title": "On the importance of noise scheduling for diffusion models.",
236
+ "author": "Ting Chen.",
237
+ "venue": "arXiv preprint arXiv:2301.10972, 2023.",
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "7": {
243
+ "title": "Perception prioritized training of diffusion models.",
244
+ "author": "Jooyoung Choi, Jungbeom Lee, Chaehun Shin, Sungwon Kim, Hyunwoo Kim, and Sungroh Yoon.",
245
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11472\u201311481, 2022.",
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "8": {
251
+ "title": "Scalable high-resolution pixel-space image synthesis with hourglass diffusion transformers.",
252
+ "author": "Katherine Crowson, Stefan Andreas Baumann, Alex Birch, Tanishq Mathew Abraham, Daniel Z Kaplan, and Enrico Shippole.",
253
+ "venue": "In Forty-first International Conference on Machine Learning, 2024.",
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "9": {
259
+ "title": "Imagenet: A large-scale hierarchical image database.",
260
+ "author": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei.",
261
+ "venue": "In 2009 IEEE conference on computer vision and pattern recognition, pp. 248\u2013255. Ieee, 2009.",
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "10": {
267
+ "title": "Scaling rectified flow transformers for high-resolution image synthesis.",
268
+ "author": "Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas M\u00fcller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al.",
269
+ "venue": "arXiv preprint arXiv:2403.03206, 2024.",
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "11": {
275
+ "title": "Ernie-vilg 2.0: Improving text-to-image diffusion model with knowledge-enhanced mixture-of-denoising-experts.",
276
+ "author": "Zhida Feng, Zhenyu Zhang, Xintong Yu, Yewei Fang, Lanxin Li, Xuyi Chen, Yuxiang Lu, Jiaxiang Liu, Weichong Yin, Shikun Feng, et al.",
277
+ "venue": "arXiv preprint arXiv:2210.15257, 2022.",
278
+ "url": null
279
+ }
280
+ },
281
+ {
282
+ "12": {
283
+ "title": "Masked diffusion transformer is a strong image synthesizer.",
284
+ "author": "Shanghua Gao, Pan Zhou, Ming-Ming Cheng, and Shuicheng Yan.",
285
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 23164\u201323173, 2023.",
286
+ "url": null
287
+ }
288
+ },
289
+ {
290
+ "13": {
291
+ "title": "Vector quantized diffusion model for text-to-image synthesis.",
292
+ "author": "Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, and Baining Guo.",
293
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10696\u201310706, 2022.",
294
+ "url": null
295
+ }
296
+ },
297
+ {
298
+ "14": {
299
+ "title": "Efficient diffusion training via min-snr weighting strategy.",
300
+ "author": "Tiankai Hang, Shuyang Gu, Chen Li, Jianmin Bao, Dong Chen, Han Hu, Xin Geng, and Baining Guo.",
301
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 7441\u20137451, October 2023.",
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "15": {
307
+ "title": "Classifier-free diffusion guidance.",
308
+ "author": "Jonathan Ho and Tim Salimans.",
309
+ "venue": "In NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2021.",
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "16": {
315
+ "title": "Denoising diffusion probabilistic models.",
316
+ "author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel.",
317
+ "venue": "Advances in Neural Information Processing Systems, 33:6840\u20136851, 2020.",
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "17": {
323
+ "title": "Video diffusion models.",
324
+ "author": "Jonathan Ho, Tim Salimans, Alexey A. Gritsenko, William Chan, Mohammad Norouzi, and David J. Fleet.",
325
+ "venue": "In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "18": {
331
+ "title": "simple diffusion: End-to-end diffusion for high resolution images.",
332
+ "author": "Emiel Hoogeboom, Jonathan Heek, and Tim Salimans.",
333
+ "venue": "In International Conference on Machine Learning, pp. 13213\u201313232. PMLR, 2023.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "19": {
339
+ "title": "Elucidating the design space of diffusion-based generative models.",
340
+ "author": "Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine.",
341
+ "venue": "In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "20": {
347
+ "title": "Analyzing and improving the training dynamics of diffusion models.",
348
+ "author": "Tero Karras, Miika Aittala, Jaakko Lehtinen, Janne Hellsten, Timo Aila, and Samuli Laine.",
349
+ "venue": "In Proc. CVPR, 2024.",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "21": {
355
+ "title": "Adam: A method for stochastic optimization.",
356
+ "author": "D. P. Kingma and J. Ba.",
357
+ "venue": "In International Conference on Learning Representations, 2014.",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "22": {
363
+ "title": "Variational diffusion models.",
364
+ "author": "Diederik Kingma, Tim Salimans, Ben Poole, and Jonathan Ho.",
365
+ "venue": "Advances in neural information processing systems, 34:21696\u201321707, 2021.",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "23": {
371
+ "title": "Understanding diffusion objectives as the ELBO with simple data augmentation.",
372
+ "author": "Diederik P Kingma and Ruiqi Gao.",
373
+ "venue": "In Thirty-seventh Conference on Neural Information Processing Systems, 2023.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "24": {
379
+ "title": "Common diffusion noise schedules and sample steps are flawed.",
380
+ "author": "Shanchuan Lin, Bingchen Liu, Jiashi Li, and Xiao Yang.",
381
+ "venue": "In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp. 5404\u20135411, 2024.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "25": {
387
+ "title": "Flow matching for generative modeling.",
388
+ "author": "Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matthew Le.",
389
+ "venue": "In The Eleventh International Conference on Learning Representations, 2022.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "26": {
395
+ "title": "Flow straight and fast: Learning to generate and transfer data with rectified flow.",
396
+ "author": "Xingchao Liu, Chengyue Gong, et al.",
397
+ "venue": "In The Eleventh International Conference on Learning Representations, 2022.",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "27": {
403
+ "title": "Deep learning face attributes in the wild.",
404
+ "author": "Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang.",
405
+ "venue": "In Proceedings of International Conference on Computer Vision (ICCV), December 2015.",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "28": {
411
+ "title": "Point-e: A system for generating 3d point clouds from complex prompts.",
412
+ "author": "Alex Nichol, Heewoo Jun, Prafulla Dhariwal, Pamela Mishkin, and Mark Chen.",
413
+ "venue": "arXiv preprint arXiv:2212.08751, 2022.",
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "29": {
419
+ "title": "Improved denoising diffusion probabilistic models.",
420
+ "author": "Alexander Quinn Nichol and Prafulla Dhariwal.",
421
+ "venue": "In International Conference on Machine Learning, pp. 8162\u20138171. PMLR, 2021.",
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "30": {
427
+ "title": "Scalable diffusion models with transformers.",
428
+ "author": "William Peebles and Saining Xie.",
429
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4195\u20134205, 2023.",
430
+ "url": null
431
+ }
432
+ },
433
+ {
434
+ "31": {
435
+ "title": "Movie gen: A cast of media foundation models.",
436
+ "author": "Adam Polyak, Amit Zohar, Andrew Brown, Andros Tjandra, Animesh Sinha, Ann Lee, Apoorv Vyas, Bowen Shi, Chih-Yao Ma, Ching-Yao Chuang, et al.",
437
+ "venue": "arXiv preprint arXiv:2410.13720, 2024.",
438
+ "url": null
439
+ }
440
+ },
441
+ {
442
+ "32": {
443
+ "title": "Hierarchical text-conditional image generation with clip latents.",
444
+ "author": "Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen.",
445
+ "venue": "arXiv preprint arXiv:2204.06125, 2022.",
446
+ "url": null
447
+ }
448
+ },
449
+ {
450
+ "33": {
451
+ "title": "High-resolution image synthesis with latent diffusion models.",
452
+ "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.",
453
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10684\u201310695, 2022.",
454
+ "url": null
455
+ }
456
+ },
457
+ {
458
+ "34": {
459
+ "title": "Photorealistic text-to-image diffusion models with deep language understanding.",
460
+ "author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Raphael Gontijo-Lopes, Burcu Karagol Ayan, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi.",
461
+ "venue": "In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022.",
462
+ "url": null
463
+ }
464
+ },
465
+ {
466
+ "35": {
467
+ "title": "Progressive distillation for fast sampling of diffusion models.",
468
+ "author": "Tim Salimans and Jonathan Ho.",
469
+ "venue": "In International Conference on Learning Representations, 2022.",
470
+ "url": null
471
+ }
472
+ },
473
+ {
474
+ "36": {
475
+ "title": "Make-a-video: Text-to-video generation without text-video data.",
476
+ "author": "Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, Devi Parikh, Sonal Gupta, and Yaniv Taigman.",
477
+ "venue": "In The Eleventh International Conference on Learning Representations, 2023.",
478
+ "url": null
479
+ }
480
+ },
481
+ {
482
+ "37": {
483
+ "title": "Denoising diffusion implicit models.",
484
+ "author": "Jiaming Song, Chenlin Meng, and Stefano Ermon.",
485
+ "venue": "In International Conference on Learning Representations, 2021.",
486
+ "url": null
487
+ }
488
+ },
489
+ {
490
+ "38": {
491
+ "title": "Volumediffusion: Flexible text-to-3d generation with efficient volumetric encoder.",
492
+ "author": "Zhicong Tang, Shuyang Gu, Chunyu Wang, Ting Zhang, Jianmin Bao, Dong Chen, and Baining Guo.",
493
+ "venue": "arXiv preprint arXiv:2312.11459, 2023.",
494
+ "url": null
495
+ }
496
+ },
497
+ {
498
+ "39": {
499
+ "title": "Rodin: A generative model for sculpting 3d digital avatars using diffusion.",
500
+ "author": "Tengfei Wang, Bo Zhang, Ting Zhang, Shuyang Gu, Jianmin Bao, Tadas Baltrusaitis, Jingjing Shen, Dong Chen, Fang Wen, Qifeng Chen, et al.",
501
+ "venue": "arXiv preprint arXiv:2212.06135, 2022.",
502
+ "url": null
503
+ }
504
+ },
505
+ {
506
+ "40": {
507
+ "title": "Score-based generative modeling through stochastic differential equations.",
508
+ "author": "S. Yang, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole.",
509
+ "venue": "In International Conference on Learning Representations, 2021.",
510
+ "url": null
511
+ }
512
+ },
513
+ {
514
+ "41": {
515
+ "title": "Opendit: An easy, fast and memory-efficient system for dit training and inference.",
516
+ "author": "Xuanlei Zhao, Zhongkai Zhao, Ziming Liu, Haotian Zhou, Qianli Ma, and Yang You.",
517
+ "venue": "https://github.com/NUS-HPC-AI-Lab/OpenDiT, 2024.",
518
+ "url": null
519
+ }
520
+ }
521
+ ],
522
+ "url": "http://arxiv.org/html/2407.03297v2"
523
+ }
20241127/2407.04127v3.json ADDED
@@ -0,0 +1,609 @@
1
+ {
2
+ "title": "Biometric Authentication Based on Enhanced Remote Photoplethysmography Signal Morphology",
3
+ "abstract": "Remote photoplethysmography (rPPG) is a non-contact method for measuring cardiac signals from facial videos, offering a convenient alternative to contact photoplethysmography (cPPG) obtained from contact sensors. Recent studies have shown that each individual possesses a unique cPPG signal morphology that can be utilized as a biometric identifier, which has inspired us to utilize the morphology of rPPG signals extracted from facial videos for person authentication. Since the facial appearance and rPPG are mixed in the facial videos, we first de-identify facial videos to remove facial appearance while preserving the rPPG information, which protects facial privacy and guarantees that only rPPG is used for authentication. The de-identified videos are fed into an rPPG model to get the rPPG signal morphology for authentication. In the first training stage, unsupervised rPPG training is performed to get coarse rPPG signals. In the second training stage, an rPPG-cPPG hybrid training is performed by incorporating external cPPG datasets to achieve rPPG biometric authentication and enhance rPPG signal morphology. Our approach needs only de-identified facial videos with subject IDs to train rPPG authentication models. The experimental results demonstrate that rPPG signal morphology hidden in facial videos can be used for biometric authentication. The code is available at https://github.com/zhaodongsun/rppg_biometrics.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Facial videos contain invisible skin color changes induced by remote photoplethysmography (rPPG) signals, providing valuable cardiovascular information, such as heart rate. Similar to rPPG, contact photoplethysmography (cPPG) captures color changes in fingertips to monitor blood volume changes. cPPG signals, obtained using contact sensors, have been used for biometric authentication [12 ###reference_b12###, 11 ###reference_b11###]. Given the similar nature and measurement principles of rPPG and cPPG [26 ###reference_b26###], rPPG has the potential for biometric authentication. However, the feasibility of rPPG biometric authentication still needs to be validated. Hence, our research questions are: 1) Can rPPG signals be employed for biometric authentication? 2) If so, how can an rPPG-based biometric system be developed? 3) What are the advantages associated with utilizing rPPG biometrics?\n\n###figure_1### (a) rPPG Authentication System\n\n###figure_2### (b) rPPG Morphology Enhancement\nWe first examine the quality and discriminative power of rPPG signals. rPPG signals are derived from subtle changes in facial color caused by blood volume changes during heartbeats. Recent advances [49 ###reference_b49###, 20 ###reference_b20###] have achieved high-quality rPPG measurement, especially when the face has minimal or no movement. Hence, it is feasible to obtain high-quality rPPG signals. However, the question remains whether these high-quality rPPG signals contain subject-specific biometric characteristics. One work [32 ###reference_b32###] has tried using rPPG for biometrics, but the preliminary study was limited by a small-scale dataset and low-quality rPPG, offering inadequate authentication performance for practical applications.\nIn this paper, we propose an rPPG-based method for biometric authentication, as shown in Fig. 1 ###reference_###(a). Considering facial appearance and rPPG are mixed together in facial videos, we first de-identify facial videos while preserving the rPPG information. This step can guarantee that only rPPG information is used for biometric authentication while facial appearance cannot be used. In addition, this step can also conceal sensitive facial appearance information for privacy protection. The first module is the rPPG model that can extract rPPG signals from the de-identified facial videos. The second module is the rPPG-Authn model that utilizes the rPPG morphology to output person authentication results. We design a two-stage training strategy and rPPG-cPPG hybrid training by incorporating external cPPG datasets to exploit rPPG morphology for biometric authentication. Fig. 1 ###reference_###(b) illustrates the rPPG morphology enhancement. Note that we only use de-identified videos with subject IDs for rPPG biometrics.\nThere are several advantages of rPPG biometrics. Compared with facial appearances, the rPPG biometric system only utilizes de-identified facial videos, eliminating the need for sensitive facial appearance. Moreover, rPPG biometrics offers an additional degree of resistance to spoofing, as rPPG inherently serves as a countermeasure to presentation attacks [21 ###reference_b21###, 19 ###reference_b19###]. In contrast, without dedicated presentation attack detection (PAD) methods, conventional face recognition algorithms are vulnerable to presentation attacks and less secure than rPPG-based biometrics. 
Additionally, since both rPPG biometrics and face recognition use facial videos as data sources, combining both biometric modalities can potentially enhance both accuracy and security. When compared with cPPG biometrics, rPPG biometrics offers the advantages of being non-contact and only requiring off-the-shelf cameras, while cPPG biometrics necessitates specific contact sensors like pulse oximeters. Compared with iris recognition [46 ###reference_b46###, 5 ###reference_b5###] which requires iris scanners, rPPG biometrics only requires cheap RGB cameras and is robust to presentation attacks.\nOur contributions include:\nWe propose a new biometric authentication method based on rPPG. We utilize two-stage training to achieve rPPG morphology enhancement and accurate biometric authentication performance. We illustrate that utilizing de-identified facial videos is effective for rPPG biometric authentication and ensures the protection of facial appearance privacy.\nWe conduct comprehensive experiments on multiple datasets to validate the discriminative power of rPPG biometrics. We demonstrate that rPPG biometrics can achieve comparable performance with cPPG biometrics. We also investigate factors that may influence the performance of rPPG biometrics.\nWe discover that our rPPG-based biometric method can enhance rPPG morphology, which opens up possibilities for rPPG morphology learning from facial videos."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "rPPG Measurement",
21
+ "text": "[41 ###reference_b41###] initially proposed measuring rPPG from face videos via the green channel. Subsequent handcrafted methods have been introduced to enhance the quality of the rPPG signal [34 ###reference_b34###, 6 ###reference_b6###, 18 ###reference_b18###, 40 ###reference_b40###, 45 ###reference_b45###]. Recently, there has been rapid growth in deep learning (DL) approaches for rPPG measurement. Several studies [4 ###reference_b4###, 37 ###reference_b37###, 20 ###reference_b20###, 31 ###reference_b31###, 16 ###reference_b16###] utilize 2D convolutional neural networks (CNN) to input consecutive video frames for rPPG measurement. Another set of DL-based methods [28 ###reference_b28###, 29 ###reference_b29###, 23 ###reference_b23###, 24 ###reference_b24###, 7 ###reference_b7###] employ a spatial-temporal signal map obtained from different facial regions, which is then fed into 2DCNN models. 3DCNN-based methods [50 ###reference_b50###] and transformer-based methods [52 ###reference_b52###, 51 ###reference_b51###] have been proposed to enhance spatiotemporal performance and long-range spatiotemporal perception.\nAdditionally, multiple unsupervised rPPG methods [8 ###reference_b8###, 43 ###reference_b43###, 39 ###reference_b39###, 36 ###reference_b36###, 47 ###reference_b47###, 53 ###reference_b53###] have been proposed. Since GT signals are expensive to collect and synchronize in rPPG datasets, unsupervised rPPG methods only require facial videos for training without any GT signal and achieve performance similar to the supervised methods. However, most works on rPPG measurement primarily focus on the accuracy of heart rate estimation, while neglecting the rPPG morphology."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "cPPG-based Biometrics",
27
+ "text": "[10 ###reference_b10###] was the first attempt to utilize cPPG for biometric authentication. They extracted some fundamental morphological features, such as peak upward/downward slopes, for cPPG biometrics. Subsequently, other studies have explored additional morphological features, including cPPG derivatives [48 ###reference_b48###] and fiducial points [22 ###reference_b22###]. More recently, researchers have focused on employing DL methods to automatically extract morphological features. [25 ###reference_b25###, 2 ###reference_b2###, 15 ###reference_b15###] directly input cPPG signals into 1DCNN or long short-term memory (LSTM) architectures to conduct biometric authentication, while [12 ###reference_b12###, 11 ###reference_b11###] cut cPPG signals into periodic segments and utilize multiple representations of these periodic segments as inputs to a 1DCNN model. Furthermore, [12 ###reference_b12###] has collected datasets for cPPG biometrics and investigated the permanence of cPPG biometrics. There exists one preliminary work on rPPG biometrics [32 ###reference_b32###], but only a traditional independent component analysis (ICA) based method [34 ###reference_b34###] was applied for rPPG extraction, which yields low-quality rPPG morphology for biometric authentication."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Method",
33
+ "text": "Our method consists of facial video de-identification and two training stages. As the rPPG signal does not rely on facial appearance, we first de-identify the input video to avoid facial appearance being used by our method. In the first training stage, we perform unsupervised rPPG training on the de-identified videos to achieve basic rPPG signal measurement. In the second training stage, we use rPPG-cPPG hybrid training for biometric authentication and rPPG morphology enhancement."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Face De-identification for rPPG Biometrics",
39
+ "text": "We propose to de-identify facial videos using spatial downsampling and pixel permutation. This step aims to obfuscate facial appearances while preserving the rPPG information. Since rPPG signals are spatially redundant at different facial regions and largely independent of spatial information as shown by [40 ###reference_b40###, 27 ###reference_b27###], rPPG signals can be well preserved in this step while facial appearances are completely erased. The reasons for face de-identification are twofold. First, the facial appearance and rPPG information are intertwined in facial videos. We remove facial appearance to make sure that the biometric model performs recognition solely based on the rPPG information. Second, this step can remove facial appearances to protect facial privacy information during rPPG authentication.\n\n###figure_3### The facial video is de-identified as shown in Fig. 2 ###reference_###. Faces in the original videos are cropped using OpenFace [1 ###reference_b1###] by locating the boundary landmarks. The cropped facial video , where , , and are time length, height, and width, is downsampled by averaging the pixels in a sample region to get . It has been demonstrated that such downsampled facial videos are still effective in rPPG estimation [40 ###reference_b40###, 27 ###reference_b27###]. Since rPPG signal extraction does not largely depend on spatial information [40 ###reference_b40###], we further permutate the pixels to completely obfuscate the spatial information to get . Note that the permutation pattern is the same for each frame in a video but distinct for different videos. Since the spatial information is eliminated, we reshape the de-identified video into a spatiotemporal (ST) map for compact rPPG representation like [27 ###reference_b27###]."
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "The 1st training stage: rPPG Unsupervised Pre-training",
45
+ "text": "This stage aims to train a basic rPPG model capable of extracting rPPG with precise heartbeats. We use unsupervised training to obtain the basic rPPG model. The main reasons for unsupervised training are: 1) Unsupervised rPPG training does not require GT PPG signals from contact sensors, which means only facial videos with subject IDs are required in our entire method. 2) The performance of unsupervised rPPG training [8 ###reference_b8###, 39 ###reference_b39###] is on par with supervised methods.\n\n###figure_4### We adopt and customize the unsupervised Contrast-Phys (CP) architecture [39 ###reference_b39###] to 2D ST-map inputs since CP can only use face videos as inputs. The modified method called Contrast-Phys-2D (CP2D) is shown in Fig. 3 ###reference_###. Two different ST maps from two different videos are the inputs of the rPPG model , where is 10 seconds. The rPPG model is based on a 2D convolutional neural network to output rPPG ST maps where rPPG signals are stacked vertically. Similar to CP, the spatial dimension is set as four. The architecture of the rPPG model is presented in the supplementary materials. Inspired by spatiotemporal rPPG sampling in CP, we use a patch with the shape to randomly get rPPG ST samples from rPPG ST maps , respectively. The rPPG ST samples are averaged along the spatial dimension to get rPPG samples and the corresponding power spectral densities (PSDs) . We use rPPG prior knowledge [39 ###reference_b39###] including rPPG spatiotemporal similarity and cross-video rPPG dissimilarity to make positive pairs ( or ) and negative pairs , which can be used in the positive and negative terms in the contrastive loss . The contrastive loss is used to pull together the PSDs originating from the same videos and push away the PSDs from different videos. The loss function is shown below. During inference, the rPPG ST map is averaged along the spatial dimension to get the rPPG signal .\n\n\n###figure_5### However, since CP2D does not utilize any prior knowledge about morphology, the resulting rPPG signals lack morphology information. Fig. 4 ###reference_### shows a GT cPPG signal and an rPPG signal produced by CP2D. CP2D generates an rPPG signal with accurate heartbeats that align with those of the cPPG signal. However, the morphological features, such as the dicrotic notch and diastolic peak evident in the cPPG morphology, are not clearly discernible in the rPPG signals. Since these morphological features play a crucial role in differentiating individuals, we aim to further refine the rPPG signal morphology at the second training stage."
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "The 2nd training stage: rPPG-cPPG Hybrid Training",
51
+ "text": "At the second training stage, we further refine rPPG signals to obtain morphology information. Fig. 5 ###reference_### shows the rPPG-cPPG hybrid training, where the rPPG branch utilizes face videos and ID labels during training. On the other hand, the cPPG branch uses external cPPG biometric datasets to encourage the PPG-Morph model to learn morphology information, which can be incorporated into the rPPG branch through the PPG-Morph model . The PPG-Morph model comprises 1DCNN layers and transformer layers that extract morphological features from periodic segments. The two branches are trained alternately to facilitate the sharing of morphology information between the rPPG and cPPG branches. Note that our method only requires de-identified facial videos with subject IDs during training (enrollment) and only needs de-identified facial videos during inference.\n\n###figure_6###"
52
+ },
53
+ {
54
+ "section_id": "3.3.1",
55
+ "parent_section_id": "3.3",
56
+ "section_name": "3.3.1 rPPG Branch",
57
+ "text": "The rPPG branch can extract rPPG morphology and use it to differentiate individuals. This branch only requires a de-identified facial video and the ID label and does not need any GT cPPG signal for training. Therefore, de-identified facial videos with ID labels are sufficient for enrollment in the proposed rPPG biometrics scheme. The ST map derived from the de-identified facial video is fed into the pre-trained rPPG model to obtain the rPPG signal . Note that the rPPG model is the pre-trained model from the first unsupervised training stage. To segment the signal, the systolic peaks are located, and the signal is divided into K clips. Due to heart rate variability, the K clips may have different lengths, so the clip length is interpolated to 90 in order to obtain rPPG periodic segments. The choice of a length of 90 is based on the fact that the minimum heart rate (40 beats per minute) for a 60 Hz signal produces the longest periodic segment with a length of 90. Consequently, we obtain . To predict an authentication score for an individual, we use the PPG-Morph model and the rPPG classification head , which provides the rPPG morphology representation and ID probability , where is the number of individuals in the rPPG biometric dataset. The cross-entropy loss is used for ID classification, which is\nwhere is the predicted probability of the kth periodic segment belonging to the ID label ."
58
+ },
59
+ {
60
+ "section_id": "3.3.2",
61
+ "parent_section_id": "3.3",
62
+ "section_name": "3.3.2 cPPG Branch",
63
+ "text": "The cPPG branch utilizes external cPPG biometric datasets including Biosec2 [12 ###reference_b12###], BIDMC [33 ###reference_b33###, 9 ###reference_b9###], and PRRB [14 ###reference_b14###], to learn PPG morphology. Note that the external cPPG biometric datasets are available online and are not related to the facial videos in the rPPG branch. Similar to the rPPG branch, the cPPG signal is processed to obtain cPPG periodic segments . The PPG-Morph model and cPPG classification head are employed to generate the cPPG morphology representation and the ID probability prediction , where is the number of individuals in the external cPPG biometric datasets. Note that the PPG-Morph model is shared by both the rPPG branch and cPPG branch, allowing the cPPG branch to transfer the learned morphology information to the rPPG branch. The cross-entropy loss is utilized in this branch, which is\nwhere is the predicted probability of the kth periodic segment belonging to the ID label ."
64
+ },
65
+ {
66
+ "section_id": "3.3.3",
67
+ "parent_section_id": "3.3",
68
+ "section_name": "3.3.3 Alternate Backpropagation",
69
+ "text": "We alternately train the two branches and backpropagate the gradient of the two loss functions and to achieve rPPG-cPPG hybrid training. During the first step, de-identified facial videos and ID labels are sampled from the rPPG biometric dataset to calculate the loss , and the gradient of is backpropagated to update the rPPG model , the PPG-Morph model , and the rPPG classification head . During the second step, cPPG signals and ID labels are sampled from external cPPG biometric datasets to calculate the loss , and the gradient of is backpropagated to update PPG-Morph model and the cPPG classification head . These two steps are repeated in an alternating manner, allowing the two branches to be trained in turns. The cPPG branch uses external cPPG datasets to encourage the PPG-Morph model to learn morphology information. The morphology features learned from the cPPG branch can then be incorporated into the rPPG branch since the PPG-Morph model is shared by both cPPG and rPPG branches thus rPPG features are enhanced. The supplementary materials provide a detailed description of the algorithm."
70
+ },
71
+ {
72
+ "section_id": "4",
73
+ "parent_section_id": null,
74
+ "section_name": "Experiments",
75
+ "text": ""
76
+ },
77
+ {
78
+ "section_id": "4.1",
79
+ "parent_section_id": "4",
80
+ "section_name": "Implementation Details",
81
+ "text": "Datasets. We considered three public rPPG datasets, namely OBF [17 ###reference_b17###], PURE [38 ###reference_b38###], and UBFC-rPPG [3 ###reference_b3###]. The scales of these rPPG datasets are enough to validate the feasibility of rPPG biometrics since previous cPPG biometric datasets [12 ###reference_b12###, 11 ###reference_b11###] also have similar scales. These rPPG datasets consist of facial videos, GT cPPG signals, and ID labels, but our method does not require the GT cPPG. OBF dataset [17 ###reference_b17###] consists of data from 100 healthy subjects. Two 5-minute RGB facial videos were recorded for each participant. For each subject, the first facial video was recorded at rest, while the second was recorded after exercise. During the recording, participants remained seated without head or facial motions. Videos have a resolution of 1920\u00d71080 at 60 frames per second (fps). UBFC-rPPG dataset [3 ###reference_b3###] was captured using a webcam at a resolution of 640x480 at 30 fps. In each recording, the subject was positioned 1 meter away from the camera and playing a mathematical game, with the face centrally located within the video frame. The database consists of data from 42 participants, with each one having a 1-minute video. PURE dataset [38 ###reference_b38###] contains data from 10 subjects. Face videos for each subject were captured in 6 distinct scenarios: steady, talking, slow translation, fast translation, small rotation, and medium rotation, leading to a total of 60 one-minute RGB videos. Videos have a resolution of 640\u00d7480 at 30 fps.\nAdditionally, we combined the Biosec2 [12 ###reference_b12###], BIDMC [33 ###reference_b33###, 9 ###reference_b9###], and PRRB [14 ###reference_b14###] datasets to create the external cPPG biometric dataset. These datasets contain cPPG signals from 195 subjects for the cPPG branch in the rPPG-cPPG hybrid training. More details about datasets are provided in the supplementary materials.\nExperimental Setup. Our rPPG biometric experiments follow the previous cPPG biometric protocol [12 ###reference_b12###, 11 ###reference_b11###] where the training and test sets have the same persons but might be recorded in the same session (intra-session test) or recorded in different sessions (cross-session test). For the OBF dataset, we divide each pre-exercise video into three parts: the first 60% length is used for training, the following 20% length is used for validation, and the last 20% length is used for intra-session testing. The post-exercise videos are reserved for cross-session testing. As for the UBFC-rPPG dataset, the same division is applied to each video. Since each subject only contributes one video, only intra-session testing can be conducted on this dataset. Moving on to the PURE dataset, the same division is applied to each steady video. The videos involving head motion tasks are used exclusively for cross-session testing. At the first training stage, we select the best rPPG model with the lowest irrelevant power ratio (IPR) in the validation set, as conducted in [8 ###reference_b8###, 39 ###reference_b39###]. At the second training stage, we choose the best-performing models based on the lowest equal error rate (EER) in the validation set. Both training stages are carried out on a single Nvidia V100 GPU and employ the Adam optimizer with a learning rate of 1e-3. During inference, the predicted probabilities from consecutive periodic segments (5 beats, 10 beats, and 20 beats) are averaged.\nEvaluation Metrics. 
Since the model does multi-class classification, we use the one-vs-rest strategy to get the authentication results for each person. Therefore, each person has a binary classification. For each person, we can change the threshold of the model prediction output for that person to get the binary predictions, and we can plot false positive rates and true positive rates in a graph, which is the receiver operating characteristic (ROC) curve. Areas under curve (AUC) is the area under the ROC curve. If we change the threshold, we can find the threshold where the false positive rate and the false negative rate are equal. The EER is the false positive rate or false negative rate at this threshold. The final EER and AUC are averaged across all subjects. To evaluate the rPPG morphology, we calculate the Pearson correlation between the means of periodic segments from rPPG and the GT cPPG. More details are in the supplementary materials."
82
+ },
83
+ {
84
+ "section_id": "4.2",
85
+ "parent_section_id": "4",
86
+ "section_name": "Results and discussions",
87
+ "text": ""
88
+ },
89
+ {
90
+ "section_id": "4.2.1",
91
+ "parent_section_id": "4.2",
92
+ "section_name": "4.2.1 Results and discussions about rPPG authentication.",
93
+ "text": "Table 1 ###reference_### presents the results of rPPG authentication with varying signal lengths. The performance of rPPG authentication improves with longer signal lengths, such as 20 heartbeats, compared to shorter signal lengths like 10 or 5 beats. On all three datasets, the intra-session performance is satisfactory, with EERs below 1% and AUCs above 99%. However, the performance decreases during cross-session testing. On the OBF dataset, the cross-session (pre-exercise post-exercise) performance is slightly lower than the intra-session (pre-exercise pre-exercise) performance, but still achieves EER of 2.16%. On the PURE dataset, there is a significant drop in performance during cross-session (steady motion tasks) compared to intra-session (steady steady) due to the adverse impact of motion tasks on the quality of rPPG signals. Conversely, although the OBF dataset includes exercises to increase heart rates, it does not involve facial movements. This indicates that rPPG biometrics is sensitive to low-quality rPPG caused by facial motions but rPPG has reliable and unique biometric information evidenced by the varying heart rates from the same people. In practical usage, users will face the camera and keep still (like face recognition), thus such large intended head motions will not be a concern.\nThe observed rPPG periodic segments from different subjects (subject A-I) in Fig. 6 ###reference_### align with the aforementioned quantitative results. The rPPG periodic segments from the OBF dataset exhibit consistent morphology before and after exercises in Fig. 6 ###reference_###(a). Conversely, the motion tasks in the PURE dataset significantly alter morphology in Fig. 6 ###reference_###(c), resulting in noisy rPPG signals and a drop in performance during cross-session testing. Furthermore, the rPPG periodic segments from all three datasets display distinct morphologies for different subjects, highlighting the discriminative power of rPPG morphology. Fig. 7 ###reference_### shows the subject-specific biometric characteristics of rPPG morphology in detail. The rPPG periodic segments from two subjects have distinct fiducial points [22 ###reference_b22###] such as the systolic peaks, diastolic peaks, dicrotic notch, and onset/offset, which contain identity information.\n\n###figure_7### (a) rPPG periodic segments from OBF dataset\n\n###figure_8### (b) rPPG periodic segments from UBFC-rPPG dataset\n\n###figure_9### (c) rPPG periodic segments from PURE dataset\n###figure_10### Regarding fairness, prior studies [30 ###reference_b30###, 42 ###reference_b42###] highlighted skin bias in rPPG signal quality. Dark skin may yield lower-quality rPPG signals, impacting authentication performance. We assess authentication performance for light and dark skin groups in the OBF dataset with a 20-heartbeat signal length and cross-session testing. For light skin, EER and AUC are 2.52% and 97.79%, respectively. For dark skin, EER and AUC are 4.04% and 96.74%. The performance of dark skin slightly falls behind that of light skin, indicating a skin tone bias in rPPG biometrics. Addressing this fairness issue may involve collecting more data from dark-skinned people or developing new algorithms, which remains a topic for future research.\n: face recognition (FR), : cPPG biometrics, : rPPG biometrics, : Training does not converge."
94
+ },
95
+ {
96
+ "section_id": "4.2.2",
97
+ "parent_section_id": "4.2",
98
+ "section_name": "4.2.2 Comparison with other biometrics.",
99
+ "text": "In Table 2 ###reference_###, we compare rPPG biometrics with related biometric methods, including face and cPPG biometrics, when the signal length is 20 beats. For face recognition, we choose the highly cited face recognition method (FaceNet [35 ###reference_b35###]) to prove how general face recognition works on de-identified facial videos. We use FaceNet to extract embeddings from de-identified images and train two fully connected layers on the embeddings to get the classification results. Table 2 ###reference_### demonstrates that FaceNet [35 ###reference_b35###] fails to work on de-identified videos, indicating that there is no facial appearance information in the de-identified videos. Since our rPPG biometric method is privacy-preserving for facial appearances, we also compare our method with the recent privacy-preserving face recognition [13 ###reference_b13###]. The results show that our method can achieve better performance than privacy-preserving face recognition [13 ###reference_b13###]. Our rPPG biometric authentication completely gets rid of facial appearance while the privacy-preserving face recognition [13 ###reference_b13###] only adds noises to partially remove facial appearances to guarantee face recognition performance, which may still have risks of privacy leakage. In addition, we also compare our method w/ rPPG-cPPG hybrid training to our method w/ rPPG training (only rPPG branch is used for training in Fig. 5 ###reference_###, and the cPPG branch is disabled during training).\nOn the OBF dataset, ours w/ rPPG-cPPG hybrid training achieves similar intra-session performance to ours w/ rPPG training, but achieves the best cross-session performance. This means external cPPG datasets introducing morphology information can improve generalization, such as cross-session performance. Furthermore, our rPPG biometrics exhibits better performance than cPPG biometrics [11 ###reference_b11###]. This is primarily because rPPG signals are extracted from both spatial and temporal representations, allowing for the utilization of more information compared to cPPG signals, which are measured from a single spatial point in the temporal dimension. However, this holds true only when the rPPG signals are of high quality.\nOn the UBFC-rPPG dataset, ours w/ rPPG-cPPG hybrid training achieves 100% AUC but ours w/ rPPG training does not converge. The reason might be that it is difficult for the model to learn rPPG morphology from the small-scale UBFC-rPPG dataset without the help of the external cPPG dataset. This suggests that the external cPPG dataset can help the model to learn discriminative rPPG morphology information. Moreover, the performance of cPPG biometrics is still lower than that of our rPPG biometrics.\nOn the PURE dataset, both rPPG and cPPG biometrics demonstrate good performance in intra-session testing. However, in cross-session testing, our rPPG biometrics are surpassed by cPPG biometrics. This is likely due to significant facial motions in the test videos, which negatively impact the quality of rPPG signals and morphology, as shown in Figure 6 ###reference_###(c). On the other hand, cPPG signals measured from fingertips are less affected by facial motions, allowing for better performance in this scenario."
100
+ },
101
+ {
102
+ "section_id": "4.2.3",
103
+ "parent_section_id": "4.2",
104
+ "section_name": "4.2.3 Results and discussions about rPPG morphology.",
105
+ "text": "We also made an interesting finding that the rPPG-cPPG hybrid training can significantly improve rPPG morphology reconstruction. Table 3 ###reference_### shows the Pearson correlations between the mean periodic segments of GT cPPG and rPPG. High Pearson correlations mean rPPG morphology better resembles the corresponding GT cPPG. Note that our method does not require any GT cPPG for rPPG morphology reconstruction, so we choose unsupervised rPPG methods including POS [44 ###reference_b44###], ICA [34 ###reference_b34###], and [8 ###reference_b8###] for comparison. Ours w/ rPPG-cPPG hybrid training achieves significantly higher Pearson correlation than the baseline methods, CP2D, and ours w/ rPPG training, as the external cPPG datasets introduce helpful morphology information via the hybrid training to refine the rPPG morphology. Such cPPG datasets are publicly available, and thus do not introduce extra costs of data collection."
106
+ },
107
+ {
108
+ "section_id": "5",
109
+ "parent_section_id": null,
110
+ "section_name": "Conclusion",
111
+ "text": "In this paper, we validated the feasibility of rPPG biometrics from facial videos. We proposed a two-stage training scheme and novel cPPG-rPPG hybrid training by using external cPPG biometric datasets to improve rPPG biometric authentication. Our method achieves good performance on both rPPG biometrics authentication and rPPG morphology reconstruction. In addition, our method uses de-identified facial videos for authentication, which can protect sensitive facial appearance information. Future work will focus on collecting a large-scale rPPG biometric dataset and studying influencing factors like temporal stability, lighting, and recording devices."
112
+ }
113
+ ],
114
+ "appendix": [],
115
+ "tables": {
116
+ "1": {
117
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.5\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T1.2.2.3\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T1.2.2.3.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.2.2.3.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.2.2.3.1.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.2.2.3.1.1.1.1\">Signal length</span></span>\n</span></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"5\" id=\"S4.T1.2.2.2\">EER/AUC\n</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.5.6.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S4.T1.5.6.1.1\">OBF</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.6.1.2\">UBFC-rPPG</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S4.T1.5.6.1.3\">PURE</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.5.7.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.7.2.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.5.7.2.1.1\">\n<tr class=\"ltx_tr\" id=\"S4.T1.5.7.2.1.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.5.7.2.1.1.1.1\">intra-session</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.7.2.2\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.5.7.2.2.1\">\n<tr class=\"ltx_tr\" id=\"S4.T1.5.7.2.2.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S4.T1.5.7.2.2.1.1.1\">cross-session</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.7.2.3\">intra-session</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.7.2.4\">intra-session</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.7.2.5\">cross-session</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.3.3.1\">20 heartbeats (20 sec)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.2.1\">0.17%/99.97%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.3.1\">2.16%/98.10%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.4.1\">0%/100%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.5.1\">0%/100%</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T1.3.3.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.3.6.1\">9.59%/93.70%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.4.4.1\">10 heartbeats (10 sec)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.2\">0.14%/99.98%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.3\">2.61%/98.04%</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.4.4.1\">0%/100%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.5\">0.33%/99.67%</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.6\">14.00%/91.17%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S4.T1.5.5.1\">5 heartbeats (5 sec)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.5.5.2\">0.33%/99.97%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.5.5.3\">3.81%/97.89%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.5.5.4\">0.01%/99.99%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.5.5.5\">0.58%/99.36%</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.5.5.6\">18.32%/86.81%</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.7.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S4.T1.8.2\" style=\"font-size:90%;\">EER and AUC for rPPG authentication on OBF, UBFC-rPPG, and PURE datasets.</span></figcaption>\n</figure>",
118
+ "capture": "Table 1: EER and AUC for rPPG authentication on OBF, UBFC-rPPG, and PURE datasets."
119
+ },
120
+ "2": {
121
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T2.9\" style=\"width:433.6pt;height:125.1pt;vertical-align:-8.6pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-84.6pt,22.7pt) scale(0.719268441762132,0.719268441762132) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.9.9\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T2.2.2.2.3\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T2.2.2.2.3.1\">Biometric Methods</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"5\" id=\"S4.T2.2.2.2.2\">EER/AUC\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.9.9.10.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S4.T2.9.9.10.1.1\">OBF</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.9.9.10.1.2\">UBFC-rPPG</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S4.T2.9.9.10.1.3\">PURE</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.9.9.11.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.9.9.11.2.1\">intra-sess</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.9.9.11.2.2\">cross-sess</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.9.9.11.2.3\">intra-sess</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.9.9.11.2.4\">intra-sess</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.9.9.11.2.5\">cross-sess</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.3.3.3.1\">FaceNet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04127v3#bib.bib35\" title=\"\">35</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.2\">32.07%/65.87%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.3\">36.58%/60.84%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.4\">36.15%/61.03%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.5\">31.67%/66.67%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.6\">35.67%/65.11%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.4.4.4.1\">Privacy-preserving FR <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04127v3#bib.bib13\" title=\"\">13</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.2\">6.46%/91.24%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.3\">6.52%/91.92%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.4\">7.26%/90.25%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.5\">6.88%/91.27%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.6\">7.82%/90.77%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.5.5.5.1\">Hwang2021 <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04127v3#bib.bib11\" title=\"\">11</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.5.2\">1.21%/99.30%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.5.3\">16.72%/84.74%</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T2.5.5.5.4\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.5.5.5.4.1\">6.30%/94.02%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.5.5.5.1\">0%/100%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.5.5.6.1\">4.23%/98.14%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.6.6.6.1\">Patil2018 <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04127v3#bib.bib32\" title=\"\">32</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.6.6.6.2\">14.97%/89.42%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.6.6.6.3\">39.79%/62.14%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.6.6.6.4\">8.53%/88.70%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.6.6.6.5\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.6.6.6.5.1\">4.00%/92.00%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.6.6.6.6\">32.68%/72.11%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.7.7.7.1\">Ours w/ rPPG training\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.8.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.8.8.8.3.1\">0%/100%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.8.8.4\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.8.8.8.4.1\">3.23%/96.92%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.8.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.8.8.8.5.1\">0%/100%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.8.8.6\">11.68%/92.61%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.9.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T2.9.9.9.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.9.9.9.1.1\">Ours w/ rPPG-cPPG hybrid training</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.9.9.9.2\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.9.9.9.2.1\">0.17%/99.97%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.9.9.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.9.9.9.3.1\">2.16%/98.10%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.9.9.9.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.9.9.9.4.1\">0%/100%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.9.9.9.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.9.9.9.5.1\">0%/100%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.9.9.9.6\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"S4.T2.9.9.9.6.1\">9.59%/93.70%</span></td>\n</tr>\n</tbody>\n</table>\n<ul class=\"ltx_itemize\" id=\"S4.I1\">\n<li class=\"ltx_item\" id=\"S4.I1.i1\" style=\"list-style-type:none;\">\n<span class=\"ltx_tag ltx_tag_item\">\u2022</span>\n<div class=\"ltx_para\" id=\"S4.I1.i1.p1\">\n<p class=\"ltx_p\" id=\"S4.I1.i1.p1.4\">: face recognition (FR), : cPPG biometrics, : rPPG biometrics, : Training does not converge.</p>\n</div>\n</li>\n</ul>\n</span></div>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T2.11.1.1\" 
style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S4.T2.12.2\" style=\"font-size:90%;\">Performance comparison between biometric methods including face recognition, cPPG biometrics, and rPPG biometrics. Note that de-identified videos proposed in the paper are used for face recognition and rPPG biometrics.</span></figcaption>\n</figure>",
122
+ "capture": "Table 2: Performance comparison between biometric methods including face recognition, cPPG biometrics, and rPPG biometrics. Note that de-identified videos proposed in the paper are used for face recognition and rPPG biometrics."
123
+ },
124
+ "3": {
125
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T3.4\" style=\"width:433.6pt;height:241.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(71.2pt,-39.6pt) scale(1.48853765474805,1.48853765474805) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.4.4\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.1.1.1.2\">Methods</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.1.1.1\">Pearson Correlations\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.2.2.2.1\">POS <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04127v3#bib.bib44\" title=\"\">44</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.2.2.2\">0.78</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.3.3.3.1\">ICA <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04127v3#bib.bib34\" title=\"\">34</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.2\">0.77</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.4.4.4.1\">Gideon2021 <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04127v3#bib.bib8\" title=\"\">8</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.4.2\">0.77</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.5.1\" style=\"background-color:#EFEFEF;\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" colspan=\"2\" id=\"S4.T3.4.4.5.1.1\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T3.4.4.5.1.1.1\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.4.4.5.1.1.1.1\" style=\"background-color:#EFEFEF;\">After the 1st training stage</span></span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.6.2\" style=\"background-color:#EFEFEF;\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.4.4.6.2.1\"><span class=\"ltx_text\" id=\"S4.T3.4.4.6.2.1.1\" style=\"background-color:#EFEFEF;\">CP2D</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.6.2.2\"><span class=\"ltx_text\" id=\"S4.T3.4.4.6.2.2.1\" style=\"background-color:#EFEFEF;\">0.78</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.7.3\" style=\"background-color:#C0C0C0;\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" colspan=\"2\" id=\"S4.T3.4.4.7.3.1\" style=\"background-color:#C0C0C0;\"><span class=\"ltx_text\" id=\"S4.T3.4.4.7.3.1.1\" style=\"background-color:#C0C0C0;\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.4.4.7.3.1.1.1\" style=\"background-color:#C0C0C0;\">After the 1st and 2nd training stages</span></span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.8.4\" style=\"background-color:#C0C0C0;\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.4.4.8.4.1\"><span class=\"ltx_text\" id=\"S4.T3.4.4.8.4.1.1\" style=\"background-color:#C0C0C0;\">Ours w/ rPPG training</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.8.4.2\"><span class=\"ltx_text\" id=\"S4.T3.4.4.8.4.2.1\" 
style=\"background-color:#C0C0C0;\">0.70</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.9.5\" style=\"background-color:#C0C0C0;\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T3.4.4.9.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.9.5.1.1\" style=\"background-color:#C0C0C0;\">Ours w/ rPPG-cPPG hybrid training</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.4.4.9.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.9.5.2.1\" style=\"background-color:#C0C0C0;\">0.87</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T3.6.1.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S4.T3.7.2\" style=\"font-size:90%;\">Pearson correlations between GT cPPG periodic segments and the rPPG periodic segments.</span></figcaption>\n</figure>",
126
+ "capture": "Table 3: Pearson correlations between GT cPPG periodic segments and the rPPG periodic segments."
127
+ }
128
+ },
129
+ "image_paths": {
130
+ "1(a)": {
131
+ "figure_path": "2407.04127v3_figure_1(a).png",
132
+ "caption": "Figure 1: (a) rPPG Authentication System. (b) Our method can improve rPPG morphology information. The fiducial points [22] like the systolic peaks and diastolic peaks are the main subject-specific biometric characteristics in rPPG signals.",
133
+ "url": "http://arxiv.org/html/2407.04127v3/x1.png"
134
+ },
135
+ "1(b)": {
136
+ "figure_path": "2407.04127v3_figure_1(b).png",
137
+ "caption": "Figure 1: (a) rPPG Authentication System. (b) Our method can improve rPPG morphology information. The fiducial points [22] like the systolic peaks and diastolic peaks are the main subject-specific biometric characteristics in rPPG signals.",
138
+ "url": "http://arxiv.org/html/2407.04127v3/x2.png"
139
+ },
140
+ "2": {
141
+ "figure_path": "2407.04127v3_figure_2.png",
142
+ "caption": "Figure 2: Face de-identification for rPPG biometrics. The facial appearance is obfuscated while rPPG information is retained.",
143
+ "url": "http://arxiv.org/html/2407.04127v3/x3.png"
144
+ },
145
+ "3": {
146
+ "figure_path": "2407.04127v3_figure_3.png",
147
+ "caption": "Figure 3: The diagram of Contrast-Phys-2D (CP2D) for rPPG unsupervised pre-training based on contrastive learning.",
148
+ "url": "http://arxiv.org/html/2407.04127v3/x4.png"
149
+ },
150
+ "4": {
151
+ "figure_path": "2407.04127v3_figure_4.png",
152
+ "caption": "Figure 4: GT cPPG signal and rPPG signal extracted by CP2D. After the first training stage, the rPPG signal has accurate heartbeats but lacks morphology information.",
153
+ "url": "http://arxiv.org/html/2407.04127v3/x5.png"
154
+ },
155
+ "5": {
156
+ "figure_path": "2407.04127v3_figure_5.png",
157
+ "caption": "Figure 5: rPPG-cPPG hybrid training. The rPPG branch and cPPG branch are trained alternatively to utilize external cPPG signals to enhance the rPPG morphology fully.",
158
+ "url": "http://arxiv.org/html/2407.04127v3/x6.png"
159
+ },
160
+ "6(a)": {
161
+ "figure_path": "2407.04127v3_figure_6(a).png",
162
+ "caption": "Figure 6: rPPG periodic segments from (a) OBF dataset, (b) UBFC-rPPG dataset, and (c) PURE dataset. The red curves are the means of periodic segments.",
163
+ "url": "http://arxiv.org/html/2407.04127v3/x7.png"
164
+ },
165
+ "6(b)": {
166
+ "figure_path": "2407.04127v3_figure_6(b).png",
167
+ "caption": "Figure 6: rPPG periodic segments from (a) OBF dataset, (b) UBFC-rPPG dataset, and (c) PURE dataset. The red curves are the means of periodic segments.",
168
+ "url": "http://arxiv.org/html/2407.04127v3/x8.png"
169
+ },
170
+ "6(c)": {
171
+ "figure_path": "2407.04127v3_figure_6(c).png",
172
+ "caption": "Figure 6: rPPG periodic segments from (a) OBF dataset, (b) UBFC-rPPG dataset, and (c) PURE dataset. The red curves are the means of periodic segments.",
173
+ "url": "http://arxiv.org/html/2407.04127v3/x9.png"
174
+ },
175
+ "7": {
176
+ "figure_path": "2407.04127v3_figure_7.png",
177
+ "caption": "Figure 7: rPPG periodic segments and fiducial points from two subjects.",
178
+ "url": "http://arxiv.org/html/2407.04127v3/x10.png"
179
+ }
180
+ },
181
+ "validation": true,
182
+ "references": [
183
+ {
184
+ "1": {
185
+ "title": "Openface 2.0: Facial behavior analysis toolkit.",
186
+ "author": "T. Baltrusaitis, A. Zadeh, Y. C. Lim, and L.-P. Morency.",
187
+ "venue": "In 2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018), pages 59\u201366. IEEE, 2018.",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "2": {
193
+ "title": "Cornet: Deep learning framework for ppg-based heart rate estimation and biometric identification in ambulant environment.",
194
+ "author": "D. Biswas, L. Everson, M. Liu, M. Panwar, B.-E. Verhoef, S. Patki, C. H. Kim, A. Acharyya, C. Van Hoof, M. Konijnenburg, et al.",
195
+ "venue": "IEEE transactions on biomedical circuits and systems, 2019.",
196
+ "url": null
197
+ }
198
+ },
199
+ {
200
+ "3": {
201
+ "title": "Unsupervised skin tissue segmentation for remote photoplethysmography.",
202
+ "author": "S. Bobbia, R. Macwan, Y. Benezeth, A. Mansouri, and J. Dubois.",
203
+ "venue": "Pattern Recognition Letters, 124:82\u201390, 2019.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "4": {
209
+ "title": "Deepphys: Video-based physiological measurement using convolutional attention networks.",
210
+ "author": "W. Chen and D. McDuff.",
211
+ "venue": "In ECCV, pages 349\u2013365, 2018.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "5": {
217
+ "title": "How iris recognition works.",
218
+ "author": "J. Daugman.",
219
+ "venue": "In The essential guide to image processing, pages 715\u2013739. Elsevier, 2009.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "6": {
225
+ "title": "Robust pulse rate from chrominance-based rppg.",
226
+ "author": "G. De Haan and V. Jeanne.",
227
+ "venue": "IEEE Transactions on Biomedical Engineering, 60(10):2878\u20132886, 2013.",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "7": {
233
+ "title": "Dual-bridging with adversarial noise generation for domain adaptive rppg estimation.",
234
+ "author": "J. Du, S.-Q. Liu, B. Zhang, and P. C. Yuen.",
235
+ "venue": "In CVPR, 2023.",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "8": {
241
+ "title": "The way to my heart is through contrastive learning: Remote photoplethysmography from unlabelled video.",
242
+ "author": "J. Gideon and S. Stent.",
243
+ "venue": "In ICCV, pages 3995\u20134004, 2021.",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "9": {
249
+ "title": "Physiobank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals.",
250
+ "author": "A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, and H. E. Stanley.",
251
+ "venue": "circulation, 2000.",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "10": {
257
+ "title": "A novel biometric approach in human verification by photoplethysmographic signals.",
258
+ "author": "Y. Gu, Y. Zhang, and Y. Zhang.",
259
+ "venue": "In 4th International IEEE EMBS Special Topic Conference on Information Technology Applications in Biomedicine, 2003. IEEE, 2003.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "11": {
265
+ "title": "Variation-stable fusion for ppg-based biometric system.",
266
+ "author": "D. Y. Hwang, B. Taha, and D. Hatzinakos.",
267
+ "venue": "In ICASSP. IEEE, 2021.",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "12": {
273
+ "title": "Evaluation of the time stability and uniqueness in ppg-based biometric system.",
274
+ "author": "D. Y. Hwang, B. Taha, D. S. Lee, and D. Hatzinakos.",
275
+ "venue": "IEEE Transactions on Information Forensics and Security, 2020.",
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "13": {
281
+ "title": "Privacy-preserving face recognition with learnable privacy budgets in frequency domain.",
282
+ "author": "J. Ji, H. Wang, Y. Huang, J. Wu, X. Xu, S. Ding, S. Zhang, L. Cao, and R. Ji.",
283
+ "venue": "In European Conference on Computer Vision, pages 475\u2013491. Springer, 2022.",
284
+ "url": null
285
+ }
286
+ },
287
+ {
288
+ "14": {
289
+ "title": "Multiparameter respiratory rate estimation from the photoplethysmogram.",
290
+ "author": "W. Karlen, S. Raman, J. M. Ansermino, and G. A. Dumont.",
291
+ "venue": "IEEE Transactions on Biomedical Engineering, 2013.",
292
+ "url": null
293
+ }
294
+ },
295
+ {
296
+ "15": {
297
+ "title": "Cross-domain adaptation for biometric identification using photoplethysmogram.",
298
+ "author": "E. Lee, A. Ho, Y.-T. Wang, C.-H. Huang, and C.-Y. Lee.",
299
+ "venue": "In ICASSP. IEEE, 2020.",
300
+ "url": null
301
+ }
302
+ },
303
+ {
304
+ "16": {
305
+ "title": "Learning motion-robust remote photoplethysmography through arbitrary resolution videos.",
306
+ "author": "J. Li, Z. Yu, and J. Shi.",
307
+ "venue": "In AAAI, 2023.",
308
+ "url": null
309
+ }
310
+ },
311
+ {
312
+ "17": {
313
+ "title": "The obf database: A large face video database for remote physiological signal measurement and atrial fibrillation detection.",
314
+ "author": "X. Li, I. Alikhani, J. Shi, T. Seppanen, J. Junttila, K. Majamaa-Voltti, M. Tulppo, and G. Zhao.",
315
+ "venue": "In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pages 242\u2013249. IEEE, 2018.",
316
+ "url": null
317
+ }
318
+ },
319
+ {
320
+ "18": {
321
+ "title": "Remote heart rate measurement from face videos under realistic situations.",
322
+ "author": "X. Li, J. Chen, G. Zhao, and M. Pietikainen.",
323
+ "venue": "In CVPR, pages 4264\u20134271, 2014.",
324
+ "url": null
325
+ }
326
+ },
327
+ {
328
+ "19": {
329
+ "title": "Learning temporal similarity of remote photoplethysmography for fast 3d mask face presentation attack detection.",
330
+ "author": "S.-Q. Liu, X. Lan, and P. C. Yuen.",
331
+ "venue": "IEEE Transactions on Information Forensics and Security, 2022.",
332
+ "url": null
333
+ }
334
+ },
335
+ {
336
+ "20": {
337
+ "title": "Multi-task temporal shift attention networks for on-device contactless vitals measurement.",
338
+ "author": "X. Liu, J. Fromm, S. Patel, and D. McDuff.",
339
+ "venue": "In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, NeurIPS, volume 33, pages 19400\u201319411, 2020.",
340
+ "url": null
341
+ }
342
+ },
343
+ {
344
+ "21": {
345
+ "title": "Learning deep models for face anti-spoofing: Binary or auxiliary supervision.",
346
+ "author": "Y. Liu, A. Jourabloo, and X. Liu.",
347
+ "venue": "In CVPR, 2018.",
348
+ "url": null
349
+ }
350
+ },
351
+ {
352
+ "22": {
353
+ "title": "Seeing red: Ppg biometrics using smartphone cameras.",
354
+ "author": "G. Lovisotto, H. Turner, S. Eberz, and I. Martinovic.",
355
+ "venue": "In CVPRW, 2020.",
356
+ "url": null
357
+ }
358
+ },
359
+ {
360
+ "23": {
361
+ "title": "Dual-gan: Joint bvp and noise modeling for remote physiological measurement.",
362
+ "author": "H. Lu, H. Han, and S. K. Zhou.",
363
+ "venue": "In CVPR, pages 12404\u201312413, 2021.",
364
+ "url": null
365
+ }
366
+ },
367
+ {
368
+ "24": {
369
+ "title": "Neuron structure modeling for generalizable remote physiological measurement.",
370
+ "author": "H. Lu, Z. Yu, X. Niu, and Y.-C. Chen.",
371
+ "venue": "In CVPR, pages 18589\u201318599, 2023.",
372
+ "url": null
373
+ }
374
+ },
375
+ {
376
+ "25": {
377
+ "title": "End-to-end photopleth ysmography (ppg) based biometric authentication by using convolutional neural networks.",
378
+ "author": "J. Luque, G. Cortes, C. Segura, A. Maravilla, J. Esteban, and J. Fabregat.",
379
+ "venue": "In 2018 26th European Signal Processing Conference (EUSIPCO). IEEE, 2018.",
380
+ "url": null
381
+ }
382
+ },
383
+ {
384
+ "26": {
385
+ "title": "Remote detection of photoplethysmographic systolic and diastolic peaks using a digital camera.",
386
+ "author": "D. McDuff, S. Gontarek, and R. W. Picard.",
387
+ "venue": "IEEE Transactions on Biomedical Engineering, 2014.",
388
+ "url": null
389
+ }
390
+ },
391
+ {
392
+ "27": {
393
+ "title": "Synrhythm: Learning a deep heart rate estimator from general to specific.",
394
+ "author": "X. Niu, H. Han, S. Shan, and X. Chen.",
395
+ "venue": "In ICPR, pages 3580\u20133585. IEEE, 2018.",
396
+ "url": null
397
+ }
398
+ },
399
+ {
400
+ "28": {
401
+ "title": "Rhythmnet: End-to-end heart rate estimation from face via spatial-temporal representation.",
402
+ "author": "X. Niu, S. Shan, H. Han, and X. Chen.",
403
+ "venue": "IEEE Transactions on Image Processing, 29:2409\u20132423, 2019.",
404
+ "url": null
405
+ }
406
+ },
407
+ {
408
+ "29": {
409
+ "title": "Video-based remote physiological measurement via cross-verified feature disentangling.",
410
+ "author": "X. Niu, Z. Yu, H. Han, X. Li, S. Shan, and G. Zhao.",
411
+ "venue": "In ECCV, pages 295\u2013310. Springer, 2020.",
412
+ "url": null
413
+ }
414
+ },
415
+ {
416
+ "30": {
417
+ "title": "A meta-analysis of the impact of skin tone and gender on non-contact photoplethysmography measurements.",
418
+ "author": "E. M. Nowara, D. McDuff, and A. Veeraraghavan.",
419
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020.",
420
+ "url": null
421
+ }
422
+ },
423
+ {
424
+ "31": {
425
+ "title": "The benefit of distraction: Denoising camera-based physiological measurements using inverse attention.",
426
+ "author": "E. M. Nowara, D. McDuff, and A. Veeraraghavan.",
427
+ "venue": "In ICCV, pages 4955\u20134964, 2021.",
428
+ "url": null
429
+ }
430
+ },
431
+ {
432
+ "32": {
433
+ "title": "A non-contact ppg biometric system based on deep neural network.",
434
+ "author": "O. R. Patil, W. Wang, Y. Gao, W. Xu, and Z. Jin.",
435
+ "venue": "In 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS). IEEE, 2018.",
436
+ "url": null
437
+ }
438
+ },
439
+ {
440
+ "33": {
441
+ "title": "Toward a robust estimation of respiratory rate from pulse oximeters.",
442
+ "author": "M. A. Pimentel, A. E. Johnson, P. H. Charlton, D. Birrenkott, P. J. Watkinson, L. Tarassenko, and D. A. Clifton.",
443
+ "venue": "IEEE Transactions on Biomedical Engineering, 2016.",
444
+ "url": null
445
+ }
446
+ },
447
+ {
448
+ "34": {
449
+ "title": "Advancements in noncontact, multiparameter physiological measurements using a webcam.",
450
+ "author": "M.-Z. Poh, D. J. McDuff, and R. W. Picard.",
451
+ "venue": "IEEE transactions on biomedical engineering, 58(1):7\u201311, 2010.",
452
+ "url": null
453
+ }
454
+ },
455
+ {
456
+ "35": {
457
+ "title": "Facenet: A unified embedding for face recognition and clustering.",
458
+ "author": "F. Schroff, D. Kalenichenko, and J. Philbin.",
459
+ "venue": "In CVPR, pages 815\u2013823, 2015.",
460
+ "url": null
461
+ }
462
+ },
463
+ {
464
+ "36": {
465
+ "title": "Non-contrastive unsupervised learning of physiological signals from video.",
466
+ "author": "J. Speth, N. Vance, P. Flynn, and A. Czajka.",
467
+ "venue": "In CVPR, 2023.",
468
+ "url": null
469
+ }
470
+ },
471
+ {
472
+ "37": {
473
+ "title": "Visual heart rate estimation with convolutional neural network.",
474
+ "author": "R. \u0160petl\u00edk, V. Franc, and J. Matas.",
475
+ "venue": "In BMVC, pages 3\u20136, 2018.",
476
+ "url": null
477
+ }
478
+ },
479
+ {
480
+ "38": {
481
+ "title": "Non-contact video-based pulse rate measurement on a mobile service robot.",
482
+ "author": "R. Stricker, S. M\u00fcller, and H.-M. Gross.",
483
+ "venue": "In The 23rd IEEE International Symposium on Robot and Human Interactive Communication, pages 1056\u20131062. IEEE, 2014.",
484
+ "url": null
485
+ }
486
+ },
487
+ {
488
+ "39": {
489
+ "title": "Contrast-phys: Unsupervised video-based remote physiological measurement via spatiotemporal contrast.",
490
+ "author": "Z. Sun and X. Li.",
491
+ "venue": "In ECCV, pages 492\u2013510. Springer, 2022.",
492
+ "url": null
493
+ }
494
+ },
495
+ {
496
+ "40": {
497
+ "title": "Self-adaptive matrix completion for heart rate estimation from face videos under realistic conditions.",
498
+ "author": "S. Tulyakov, X. Alameda-Pineda, E. Ricci, L. Yin, J. F. Cohn, and N. Sebe.",
499
+ "venue": "In CVPR, pages 2396\u20132404, 2016.",
500
+ "url": null
501
+ }
502
+ },
503
+ {
504
+ "41": {
505
+ "title": "Remote plethysmographic imaging using ambient light.",
506
+ "author": "W. Verkruysse, L. O. Svaasand, and J. S. Nelson.",
507
+ "venue": "Optics express, 16(26):21434\u201321445, 2008.",
508
+ "url": null
509
+ }
510
+ },
511
+ {
512
+ "42": {
513
+ "title": "Blending camera and 77 ghz radar sensing for equitable, robust plethysmography.",
514
+ "author": "A. Vilesov, P. Chari, A. Armouti, A. B. Harish, K. Kulkarni, A. Deoghare, L. Jalilian, and A. Kadambi.",
515
+ "venue": "ACM Trans. Graph.(SIGGRAPH), 2022.",
516
+ "url": null
517
+ }
518
+ },
519
+ {
520
+ "43": {
521
+ "title": "Self-supervised representation learning framework for remote physiological measurement using spatiotemporal augmentation loss.",
522
+ "author": "H. Wang, E. Ahn, and J. Kim.",
523
+ "venue": "AAAI, 2022.",
524
+ "url": null
525
+ }
526
+ },
527
+ {
528
+ "44": {
529
+ "title": "Algorithmic principles of remote ppg.",
530
+ "author": "W. Wang, A. C. den Brinker, S. Stuijk, and G. De Haan.",
531
+ "venue": "IEEE Transactions on Biomedical Engineering, 64(7):1479\u20131491, 2016.",
532
+ "url": null
533
+ }
534
+ },
535
+ {
536
+ "45": {
537
+ "title": "Exploiting spatial redundancy of image sensor for motion robust rppg.",
538
+ "author": "W. Wang, S. Stuijk, and G. De Haan.",
539
+ "venue": "IEEE transactions on Biomedical Engineering, 62(2):415\u2013425, 2014.",
540
+ "url": null
541
+ }
542
+ },
543
+ {
544
+ "46": {
545
+ "title": "Iris recognition: an emerging biometric technology.",
546
+ "author": "R. P. Wildes.",
547
+ "venue": "Proceedings of the IEEE, 85(9):1348\u20131363, 1997.",
548
+ "url": null
549
+ }
550
+ },
551
+ {
552
+ "47": {
553
+ "title": "Simper: Simple self-supervised learning of periodic targets.",
554
+ "author": "Y. Yang, X. Liu, J. Wu, S. Borac, D. Katabi, M.-Z. Poh, and D. McDuff.",
555
+ "venue": "In ICLR, 2022.",
556
+ "url": null
557
+ }
558
+ },
559
+ {
560
+ "48": {
561
+ "title": "A pilot study on using derivatives of photoplethysmographic signals as a biometric identifier.",
562
+ "author": "J. Yao, X. Sun, and Y. Wan.",
563
+ "venue": "In 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 2007.",
564
+ "url": null
565
+ }
566
+ },
567
+ {
568
+ "49": {
569
+ "title": "Remote photoplethysmograph signal measurement from facial videos using spatio-temporal networks.",
570
+ "author": "Z. Yu, X. Li, and G. Zhao.",
571
+ "venue": "In BMVC, page 277. BMVA Press, 2019.",
572
+ "url": null
573
+ }
574
+ },
575
+ {
576
+ "50": {
577
+ "title": "Remote heart rate measurement from highly compressed facial videos: an end-to-end deep learning solution with video enhancement.",
578
+ "author": "Z. Yu, W. Peng, X. Li, X. Hong, and G. Zhao.",
579
+ "venue": "In ICCV, pages 151\u2013160, 2019.",
580
+ "url": null
581
+ }
582
+ },
583
+ {
584
+ "51": {
585
+ "title": "Physformer++: Facial video-based physiological measurement with slowfast temporal difference transformer.",
586
+ "author": "Z. Yu, Y. Shen, J. Shi, H. Zhao, Y. Cui, J. Zhang, P. Torr, and G. Zhao.",
587
+ "venue": "International Journal of Computer Vision, 2023.",
588
+ "url": null
589
+ }
590
+ },
591
+ {
592
+ "52": {
593
+ "title": "Physformer: facial video-based physiological measurement with temporal difference transformer.",
594
+ "author": "Z. Yu, Y. Shen, J. Shi, H. Zhao, P. H. Torr, and G. Zhao.",
595
+ "venue": "In CVPR, pages 4186\u20134196, 2022.",
596
+ "url": null
597
+ }
598
+ },
599
+ {
600
+ "53": {
601
+ "title": "Facial video-based remote physiological measurement via self-supervised learning.",
602
+ "author": "Z. Yue, M. Shi, and S. Ding.",
603
+ "venue": "TPAMI, 2023.",
604
+ "url": null
605
+ }
606
+ }
607
+ ],
608
+ "url": "http://arxiv.org/html/2407.04127v3"
609
+ }
20241127/2407.05784v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2407.11413v2.json ADDED
@@ -0,0 +1,131 @@
1
+ {
2
+ "title": "Distributed Prescribed-Time Convex Optimization: Cascade Design and Time-Varying Gain Approach",
3
+ "abstract": "In this paper, we address the distributed prescribed-time convex optimization\n(DPTCO) problem for a class of high-order nonlinear multi-agent systems (MASs)\nunder undirected connected graphs. A cascade design framework is proposed, dividing\nthe DPTCO implementation into two parts: distributed\noptimal trajectory generator design and local reference trajectory\ntracking controller design. The DPTCO problem is then transformed\ninto the prescribed-time stabilization problem of a cascaded system.\nChanging Lyapunov function and time-varying state transformation\nmethods together with the sufficient conditions are proposed to prove\nthe prescribed-time stabilization of the cascaded system as well as\nthe uniform boundedness of internal signals in the closed-loop MASs.\nThe proposed framework is then utilized to solve robust DPTCO problem\nfor a class of chain-integrator MASs with external disturbances by\nconstructing a novel sliding-mode variables and exploiting the property of time-varying\ngains. The proposed framework is further utilized to solve the adaptive\nDPTCO problem for a class of strict-feedback MASs with parameter uncertainty,\nin which backstepping method with prescribed-time dynamic filter is\nadopted. The descending power state transformation is introduced to\ncompensate the growth of increasing rate induced by the derivative\nof time-varying gains in recursive steps and the high-order derivative\nof local reference trajectory is not required. Finally, theoretical\nresults are verified by two numerical examples.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Distributed convex optimization (DCO) has garnered extensive attention and finds numerous applications in multi-agent systems (MASs), including but not limited to,\nreliable communications in wireless networks,\ncollision avoidance among multiple robots,\neconomic dispatch in power systems,\ndistributed optimal power flow, traffic\nmanagement for large-scale railway networks, and\ntraffic metering in urban street networks.\nIn a typical DCO problem, each agent has a local objective function only known\nto itself and there is a global objective function takes the sum of local\nobjective functions. The objective is to design distributed controllers\nwith limited local information such that the output or state of each\nagent converges to the optimum of global objective function. The earliest\nwork on DCO can be tracked back to [1 ###reference_b1###], and it attracts\nincreasing interests in the last decade after the pioneer works in\n[2 ###reference_b2###].\nThe focus of DCO research is on four aspects: generalizing the type\nof objective functions [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###]\nand systems [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###],\nfaster convergent rate [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 11 ###reference_b11###, 16 ###reference_b16###, 17 ###reference_b17###],\nand disturbance rejection [18 ###reference_b18###, 9 ###reference_b9###, 19 ###reference_b19###, 12 ###reference_b12###].\nThe optimization control algorithms for time-independent objective function\n[3 ###reference_b3###], time-varying objective function [4 ###reference_b4###]\nand objective function with constraints [5 ###reference_b5###]\nhave been proposed. In [6 ###reference_b6###],\nthe convexity of local objective function and strong convexity of global\nobjective function are respectively removed. Some works aim to achieve\nDCO for more general systems, such as single-integrator system in\n[7 ###reference_b7###], linear system in [8 ###reference_b8###],\nEuler-Lagrange system in [9 ###reference_b9###] and strict-feedback\nsystem in [10 ###reference_b10###, 20 ###reference_b20###]. Using sliding-mode\ncontrol and backstepping methods, the DCO controller can handle systems\nthat are high-order and nonlinear [12 ###reference_b12###]. A common\napproach to solving the DCO for high-order systems is the cascade\ndesign where the solution to the DCO problem is divided into two parts.\nThe first one is distributed optimum seeking, which by utilizing the\nlocal information interaction generates local optimal reference for\neach agent that asymptotically converges to the optimum of\nthe global objective function. The second one is to design local tracking\ncontroller to make the output or state asymptotically converge to\nthe local optimal references.\nThe convergence rate and the disturbance rejection are two concerns\nof DCO. In [17 ###reference_b17###, 11 ###reference_b11###], the finite-time\nconvergence of DCO is considered where all agents reach a consensus\nwithin a finite time interval while minimizing the global objective function.\nThe finite-time DCO for chain integrator MASs subject to mismatched disturbances\nis achieved in [19 ###reference_b19###]. 
Meanwhile, fixed-time convergence, where the finite settling time is independent of initial conditions,\nis shown in [13 ###reference_b13###, 16 ###reference_b16###, 15 ###reference_b15###]. In\n[14 ###reference_b14###], the predefined-time DCO is achieved by designing\na class of time-based functions, where the solution converges to a\nneighborhood of the optimum within a given time and to the optimum\nas time approaches infinity. But it cannot be extended to handle disturbances\nand high-order systems.\nIn this paper, we address the distribute prescribed-time convex optimization\n(DPTCO) for high-order nonlinear MASs with uncertainties for which\nthe solution converges to the optimum within any prescribed time.\nThe prescribed-time control is proposed to ensure that the settling\ntime does not depend on the initial values and control parameters\n[21 ###reference_b21###, 22 ###reference_b22###]. The main contribution of\nthis paper is summarized as follows.\nFirst, a DFTCO framework for a class of nonlinear MASs with disturbances\nis proposed. By embedding a cascade design, the DFTCO implementation\nis divided into two parts, namely, distributed optimal trajectory\ngenerator design and local reference trajectory tracking controller\ndesign. The DPTCO problem is then transformed into the prescribed-time\nstabilization problem of two cascaded subsystems where the first one\nis for the error of the distributed estimation towards the global\noptimum and the second one is for local tracking errors. Changing\nLyapunov function and time-varying state methods together with\nsome sufficient conditions are proposed to prove the prescribed-time\nstabilization of the cascaded system as well as the uniform boundedness\nof internal signals in the closed-loop system. A specific distributed\noptimal trajectory generator is constructed to show that the distributed\nestimation errors converges towards zero within a prescribed time.\nSecond, under the DPTCO framework, we propose a robust DPTCO algorithm\nfor a class of nonlinear chain-integrator MASs with external disturbance.\nWe design a novel sliding-mode variable and introduce a new time-varying state\ntransformation, which converts the prescribed-time stabilization problem\nof local tracking error and other states unrelated to the output into\nthe boundedness of the new variable. Different from traditional sliding-mode\ncontrol in [23 ###reference_b23###] and the prescribed-time work\nin [21 ###reference_b21###], our approach does not need the high-order\nderivative of the reference trajectory for tracking. Moreover, our\nproposed algorithm is robust for any bounded external disturbances.\nThird, we consider adaptive DPTCO for a class of strict-feedback MASs\nwith parameter uncertainty. We introduce time-varying state transformation\nof a descending power to compensate the growth of increasing rate\ninduced by derivative of time-varying gains in recursive steps. The\nbackstepping method with prescribed-time dynamic filter is adopted\nto avoid the utilization of high-order derivative of reference trajectory,\nand an adaptive law is designed to compensate parameter uncertainty.\nThe rest of the paper is organized as follows. Section 2 ###reference_###\ngives the notation and problem formulation. Section 3 ###reference_###\npresents the DPTCO framework for a type of nonlinear systems, for\nwhich Section 4 ###reference_### elaborates the\noptimal trajectory generator design. 
Given the DPTCO framework and\noptimal trajectory generator, robust DPTCO for chain-integrator MASs\nand adaptive DPTCO for strict-feedback MASs are considered in Sections\n5 ###reference_### and 6 ###reference_###, respectively.\nThe numerical simulation is conducted in Section 7 ###reference_###\nand the paper is concluded in Section 8 ###reference_###."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Notations and Problem Formulation",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Notations",
21
+ "text": ", and denote\nthe set of real numbers, the set of non-negative real numbers, and\nthe -dimensional Euclidean space, respectively. denotes\nthe initial time, the prescribed-time scale, and \nthe corresponding time interval. Define functions\n, ,\n means that \nfor any . The symbol (or )\ndenotes an -dimensional column vector whose elements are all \n(or ). For , \nfor , while be the inverse\nfunction of for .\nAn undirected graph is denoted as ,\nwhere is the node set and \nis the edge set. The existence of an edge means\nthat nodes , can communicate with each other. Denote by \nthe weighted adjacency matrix, where \nand otherwise. A self edge is not allowed, i.e., .\nThe Laplacian matrix of graph is denoted\nas , where ,\n with . If is connected,\nthen the null space of is spanned by , and\nall the other eigenvalues of are strictly positive."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Problem Formulation",
27
+ "text": "Consider the nonlinear MASs\nwhere , , \nare system state, output and control input of -th agent, respectively.\n\ndenotes the system\u2019s uncertainties or external disturbances where\n is a compact set belonging to \nand it is possibly time-varying. ,\n\nare smooth functions of their arguments satisfying \nand for any \nand . The output feedback\nsystem (1 ###reference_###) contains various specific types [24 ###reference_b24###],\ni.e., chain-integrator system [21 ###reference_b21###], strict-feedback\nsystem [25 ###reference_b25###] and feedforward system [26 ###reference_b26###].\nIn this paper, we consider the following convex optimization problem\nwhere is the lumped output of MASs in (1 ###reference_###), and is the local scalar objective function, which is convex and known only to agent . Motivated by the results in [9 ###reference_b9###], this paper assumes that gradient function of local objective function is available. Due to equality constraints, the optimum of optimization problem (2 ###reference_###) has the form for some .\nThe objective of the DPTCO\nis, for any prescribe-time , using local information interactions to design distributed controllers\n such that the outputs converge to the optimum\n within ,\ni.e.,\nirrespective of system initial value and any other control parameters\nbesides . Moreover, the state , the output and\ncontrol input must be bounded, i.e.,\n\nholds for and .\nIn order to achieve the DPTCO,\nthe function\nis used throughout the paper as the time-varying gain. The function\n increases to infinity as approaches the prescribed-time\n and is commonly used in the prescribed-time control. For\n, one has \nWe simplify as throughout this paper if no confusion\noccur. For any and ,\ndefine\nwhere we note converges to\nzero as for any and .\nWe study the problem under these two common assumptions.\nThe undirected graph is connected.\nFor each \nthe function is first-order differentiable, and \nas well as its gradient are only\nknown to -th agent. Moreover, it is -strongly convex\nand has -Lipschitz gradients, i.e., for , and ,\nwhere and are positive constants.\nUnder Assumption 2.2 ###reference_ass2###, is strongly\nconvex as is for . Therefore, if the optimization\nproblem (2 ###reference_###)\nis solvable, the optimum is unique. We need the following assumption\nfor the optimization problem to be sensible.\nThe optimal value of global objection function (2 ###reference_###),\ndenoted as , is finite and the optimum set\nis nonempty and compact [27 ###reference_b27###].\nA function \nis said to belong to class , it is strictly\nincreasing and .\nA continuous function \nis said to belong to class if, for each fixed\n, the mapping belongs to class \nwith respect to and, for each fixed , the mapping \nis decreasing with respect to and satisfies \nas . The function is said to belong class \nif belongs to class and for each\nfixed , the mapping belongs to class \nwith respect to .\n[28 ###reference_b28###] Consider\nthe system where \nis the state and is\nthe external input. 
For any given , the -dynamics is\nsaid to be prescribed-time stable if there exits \nsuch that for and ,\n holds for \nwhere .\nThe continuously differentiable function \nis called the prescribed-time stable Lyapunov function for the system\n, if and its derivative along the trajectory\nof the system satisfy, for all and ,\nwhere , , are\n functions and is denoted in (4 ###reference_###).\n is called prescribed-time convergent gain.\nThe inequalities in (7 ###reference_###) are simplified as .\nThe continuously differentiable function \nis called the prescribed-time input-to-state stable (ISS) Lyapunov\nfunction for the system with \nbeing the external input, if and its derivative along the\ntrajectory of the system satisfy, for all and\n,\nwith , , , \nand . ,\n and are called prescribed-time\nconvergent, prescribed-time ISS gain and (normal)\nISS gain, respectively. The inequalities in (8 ###reference_###) are\nsimplified as .\nWhen contains multiple inputs as \nwhere , the second inequality of (8 ###reference_###)\nbecomes \nand the inequalities are simplified as ."
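The exact expression of the time-varying gain is lost in the extracted text above. A common choice in the prescribed-time control literature is mu(t) = T / (t0 + T - t) on [t0, t0 + T), which equals 1 at t = t0 and grows unbounded as t approaches the prescribed time; the snippet below uses this form purely as an assumed, representative example and not necessarily this paper's definition.

```python
# Illustrative (assumed) prescribed-time gain: mu(t) = T / (t0 + T - t).
# It equals 1 at t = t0 and tends to infinity as t -> t0 + T.
import numpy as np

def mu(t, t0=0.0, T=5.0):
    return T / (t0 + T - t)

ts = np.linspace(0.0, 4.99, 500)
print(mu(ts[0]), mu(ts[-1]))   # 1.0 at t0, large near the prescribed time
```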
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "A Cascade Design Approach",
33
+ "text": "The cascade design approach has been used for the distributed convex\noptimization problem in [10 ###reference_b10###, 12 ###reference_b12###, 9 ###reference_b9###].\nFollowing the cascade design principle, the optimal agreement can\nbe decomposed into two subproblems, namely the distributed optimum\nseeking and local reference trajectory tracking. To this end, we propose\nthe controller in the general form of\nwhere \nis the relative information received by -th agent from its neighbors\nand . -dynamics is designed\nto estimate in (6 ###reference_###).\nThe state of can be decomposed as \nwhere -dynamics can be designed to adaptively find the gradient\nof the local objective function . -dynamics\nis similar to a PI controller and designed to admit the equilibrium\npoint \nwith some known function . \nis the local controller state used to construct the actual control\ninput for the tracking."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Coordinate Transformation and Cascaded Error System",
39
+ "text": "For , define the error states\nNote that and are the error from the\ndistributed optimal value seeking, is the optimal tracking\nerror and is the local tracking error towards the local\nestimated optimal value . Define the lumped vectors ,\n, \nand . Note that \nand . Then closed-loop system composed\nof (1 ###reference_###), (9 ###reference_###), and (10 ###reference_###)\ncan be castled into the error dynamics as follows\nwhere and in (12 ###reference_###)\ncan be derived from the definition, and\n.\nAs illustrated in Fig. 1 ###reference_###,\nthe error system is in a cascaded form. With the decomposition of\n in (13 ###reference_###), in order to show (3 ###reference_###),\nit suffices to prove the prescribed-time stability of - and\n-dynamics, i.e., there exist functions\n, such that\n###figure_1###"
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Prescribed-time Stabilization of Cascaded System",
45
+ "text": ""
46
+ },
47
+ {
48
+ "section_id": "3.2.1",
49
+ "parent_section_id": "3.2",
50
+ "section_name": "3.2.1 Changing Lyapunov Function Method",
51
+ "text": "We propose three conditions sufficient for prescribed-time stabilization\nof the cascaded system (12 ###reference_###)-(13 ###reference_###).\n: -dynamics in (12 ###reference_###)\nadmits a prescribed-time Lyapunov function \nsuch that\n\nholds;\n: -dynamics in (12 ###reference_###)\nadmits a prescribed-time ISS Lyapunov function \nsuch that \nholds for some and ;\n: in (12 ###reference_###)\nsatisfies \nfor some ; \nin and in (13 ###reference_###) satisfy and for \nand some .\nNote that condition implies that .\nInvoking comparison lemma leads to \nwhere is denoted in (5 ###reference_###).\nDue to , it gives\nshowing that the state of the first subsystem goes to zero at prescribed-time\n and the first inequality in (14 ###reference_###) is\nachieved. In order to investigate how the -dynamics affects\nthe convergence of -dynamics, we introduce the change\nof the Lyapunov function for the -dynamics as\nwith . Then, the prescribed-time convergence\nresult for the whole system is given in the following theorem.\nConsider the system composed of (1 ###reference_###),\n(9 ###reference_###) and (10 ###reference_###). Suppose the\nclosed-loop system (12 ###reference_###)-(13 ###reference_###) after the\nstate transformation satisfies conditions -.\nDefine functions \nand \nwith some and .\nSuppose\nand there exists a function \nfor (16 ###reference_###) such that\nhold. Then, the problem of DPTCO is solved for any bounded initial condition.\nProof: Due to , one has .\nTaking time derivative of in (16 ###reference_###)\nand using (18 ###reference_###) yields ,\nwhere \nand .\nInvoking comparison lemma yields\nDenote the bound of as . Given (20 ###reference_###)\nwith , one has .\nAs a result, the second term on the right-hand of (22 ###reference_###)\ncan be calculated as\nwhere \nis a finite constant. By (15 ###reference_###), one has\nwhere .\nSimilar to (23 ###reference_###), due to (21 ###reference_###) and (24 ###reference_###),\nthe third term on the right-hand of (22 ###reference_###) satisfies\n,\nwhere \nis a finite constant. Consequently, .\nThen according to (16 ###reference_###), satisfies\nwhere . (25 ###reference_###) means the\nsecond equation in (14 ###reference_###) is achieved. As a result,\nthe DPTCO is achieved.\nNext, we prove the boundedness of , ,\n. By (15 ###reference_###), (17 ###reference_###), (19 ###reference_###)\nand (25 ###reference_###), ,\n\nhold for some finite constants , ,\nand . Since , ,\n satisfy ,\nthese inequalities imply that , ,\n are bounded for . This completes\nthe proof."
52
+ },
53
+ {
54
+ "section_id": "3.2.2",
55
+ "parent_section_id": "3.2",
56
+ "section_name": "3.2.2 Time-varying State Transformation",
57
+ "text": "A common practice in the literature of prescribed-time control [22 ###reference_b22###, 21 ###reference_b21###]\nis the time-varying state transformation technique. When \nis not feasible, we can seek a time-varying state transformation\nwhere \nis a differentiable function. Generally, the mapping from \nto is nonlinear. The -dynamics\nbecomes\nwhere we used . Due to the nonlinearity of ,\n may not guarantee .\nWith the time-varying state transformation, the closed-loop system\ncomposed of (1 ###reference_###), (9 ###reference_###), and (10 ###reference_###)\ncan be casted into the error dynamics as follows\nand -dynamics and in (9 ###reference_###), (10 ###reference_###)\ncan rewritten as\nwith some functions and \nderived from (9 ###reference_###), (10 ###reference_###) and (26 ###reference_###).\nSimilarly, may not guarantee \nand . We modify conditions ,\n to ,\n as follows.\n: There exists a time-varying\nstate transformation (26 ###reference_###) such that -dynamics\nin (27 ###reference_###) admits a prescribed-time ISS Lyapunov function\n\nand \nholds for some and\n. Moreover,\nthe boundedness of implies prescribed-time convergence\nof .\n: in\nand in (28 ###reference_###) satisfy and \nfor where ,\n, \nand , .\nConsider the system composed of (1 ###reference_###),\n(9 ###reference_###) and (10 ###reference_###). Suppose the\nclosed-loop system (13 ###reference_###) and (27 ###reference_###)\nafter the state transformation satisfies conditions ,\n, \nwith\nwhere is defined in Theorem 3.1 ###reference_theorem1###, and\nhold. Then, the problem of DPTCO is solved for any bounded initial condition.\nProof: Due to , one has .\nInvoking comparison lemma yields\nSimilar to the deviations in (23 ###reference_###),\nby (30 ###reference_###) and (31 ###reference_###) , the bound of \nsatisfies ,\nwhere and .\nThe inequality implies that \nis bounded. Since the boundedness of implies\nthe prescribed-time convergence of by condition ,\nthe second equation in (14 ###reference_###) is achieved and outputs\nof the agents converge to the optimum within prescribed time.\nSimilar to the proof of Theorem 3.1 ###reference_theorem1###, by (29 ###reference_###),\nwe have ,\nand then the boundedness of all signals is guaranteed."
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Prescribed-time Optimum Seeking",
63
+ "text": "In this section, we elaborate the design of -dynamics.\nThe two subsystems of -dynamics, namely -\nand -dynamics, are designed as,\nwhere is a differentiable function\nto be designed.\nLet and \nbe such that , . Therefore, \nand is an orthogonal matrix. Define ,\n, ,\n and .For a connected graph, is a positive matrix and\n where\n and are the second smallest and largest\neigenvalues of , respectively. Let \nand . The dynamics (32 ###reference_###)\nand (33 ###reference_###) for the group of agents can be written compactly\nas\nwhere .\nNote that the system (34 ###reference_###) and (35 ###reference_###) is\nin the form of (9 ###reference_###). We have the following proposition, with proof given in appendix.\nConsider (34 ###reference_###) and\n(35 ###reference_###) under Assumption 2.1 ###reference_ass1###, 2.2 ###reference_ass2###\nand 2.3 ###reference_ass3###. Let satisfies (6 ###reference_###) and thus \nbe the optimum to the optimization problem (2 ###reference_###).\nThen\nis the solution of\nwhen the initial value of satisfies .\nAs introduced in Section 3 ###reference_###, we use the coordinate\ntransformation , \nwith and being the error variables for distributed\noptimal value seeking problem. From Proposition 4.1 ###reference_proposition1###, (34 ###reference_###)\nand (35 ###reference_###), -dynamics can be obtained, with ,\nas\nwhere .\nConsider -dynamics in (32 ###reference_###)\nand (33 ###reference_###) under Assumption 2.1 ###reference_ass1###, 2.2 ###reference_ass2###\nand 2.3 ###reference_ass3###. Define\nwhere and are given in Assumption 2.2 ###reference_ass2###.\nIf and\nholds for , then -dynamics satisfies\ncondition with\nMoreover, the bounds of and satisfy\nfor some .\nThe proof is given in appendix."
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Robust DPTCO for Chain-Integrator MASs",
69
+ "text": "In this section, we apply the DPTCO framework proposed in Section 3 ###reference_### to solve the robust DPTCO for a class of nonlinear MASs with uncertainties, called chain-integrator MASs of a relative degree greater than one.\nSince we deal with\nthe optimal tracking problem for each subsystem separately, we\nomit the superscript for simplicity when no confusion is raised. Therefore, the -th subsystem is expressed as\nwhere is the system\nstate with , \ncontrol input, system output, and \nthe uncertainties belonging to a compact set .\nThe function \nis sufficiently smooth and for each fixed it is bounded for all\n [29 ###reference_b29###].\nAccording to [30 ###reference_b30###, Lemma 11.1], the function\n satisfies\nwhere is an unknown positive function\nand is a known positive function and is bounded for all\n.Note that (45 ###reference_###)\nis in the form of (1 ###reference_###).\nWe follow the framework developed in Section 3 ###reference_###\nto solve the DPTCO problem. First, define the error as in (11 ###reference_###),\ni.e.,\nwhere is given in (32 ###reference_###) and \nis omitted in this section. Due to (14 ###reference_###) and Theorem\n4.1 ###reference_theorem1###, it suffices to design controller such\nthat the prescribed-time stabilization is achieved for .\nLet such that\nis Hurwitz, for , \nand is\nwhere is a first-order differentiable\nfunction to be designed. Since the system (45 ###reference_###) is\nnonlinear and has the relative degree greater than one and the reference\ntrajectory does not have the higher-order derivatives,\nthe traditional sliding-mode based tracking control cannot be applied\n[23 ###reference_b23###]. Instead, we construct a new variable\n as\nwith\nThen, we define the time-varying state transformation as\nwith a first-order differentiable \nto be designed. By doing so, we introduce the time-varying state transformation\nfrom to as\nwhich coincides with the procedure in Section 3 ###reference_###.\nDefine functions\n,\n\nand .\nBy (45 ###reference_###), (49 ###reference_###), and (51 ###reference_###),\n-dynamics can be expressed as\nwhere \nand\nSince is Hurwitz, there exist positive matrices , \nsuch that .\nDefine two constants\nThen, we propose the following design criteria (DC)\nfor functions in (49 ###reference_###)\nand in (52 ###reference_###) such that the time-varying\nstate transformation (52 ###reference_###) and the -dynamics\nsatisfy in Section 3.2.2 ###reference_.SSS2###.\n: satisfies\n and ,\nwhere , are given in (55 ###reference_###) and \nis given in (40 ###reference_###);\n: is chosen as .\nConsider the system (45 ###reference_###),\n-dynamics in (32 ###reference_###) and -dynamics\nin (33 ###reference_###) with time-varying state transformation (52 ###reference_###).\nIf conditions in Theorem 4.1 ###reference_theorem1### and two design criteria\n- hold, then\nthe bound of satisfies\nfor some function \nand .\nGiven in the appendix, the proof of Lemma 5.1 ###reference_lemma1###\nimplies that when is bounded for ,\nthe prescribed-time convergence of is achieved. Therefore,\nit suffices to design the controller in (53 ###reference_###)\nsuch that the closed-loop system for admits a prescribed-time\nISS Lyapunov function as in and \nis bounded for . Then, we design the controller\n as\nwith , and function defined in (46 ###reference_###).\nFor simplicity, we define\nwhere is\nintroduced in (51 ###reference_###). 
Note that and \nare functions.\nConsider the system (45 ###reference_###)\nwith the controller (57 ###reference_###), -dynamics in (32 ###reference_###)\nand -dynamics in (33 ###reference_###) with time-varying state\ntransformation (52 ###reference_###). If conditions in Theorem 4.1 ###reference_theorem1###\nand two design criteria -\nhold, then -dynamics satisfies condition .\nMoreover, it admits the prescribed-time ISS Lyapunov function in (omitting superscript ) with\nwhere .\nAnd the controller satisfies \nwith\nfor some finite constants , \nand .\nApplying Theorem 3.2 ###reference_theorem2###, 4.1 ###reference_theorem1### and Lemma 5.1 ###reference_lemma1###,\n5.2 ###reference_lemma2###, we obtain the following results.\nConsider the system composed of (32 ###reference_###),\n(33 ###reference_###), (45 ###reference_###) and (57 ###reference_###). If\nconditions in Theorem 4.1 ###reference_theorem1### and two design criteria\n- hold, the\nDPTCO problem for the chain integrator MASs (45 ###reference_###)\nis solved."
70
+ },
71
+ {
72
+ "section_id": "6",
73
+ "parent_section_id": null,
74
+ "section_name": "Adaptive DPTCO for Strict-Feedback MASs",
75
+ "text": "In this section, to further examine the generality of proposed DPTCO framework proposed in Section 3 ###reference_###,\nwe consider the adaptive DPTCO problem for a class of nonlinear strict-feedback MASs with parameter uncertainty, as follows,\nwhere is the system\nstate with , \nis output and is control input. \nis an unknown constant and \nis a known function with for .\nFor simplicity, we omit the superscript when no confusion\nis raised.\n[31 ###reference_b31###]\nFor , is first-order differentiable\nand locally Lipschitz function.\nUnder Assumption 6.1 ###reference_ass1###,\ndue to , by the mean value theorem, there exists\ncontinuous matrix-valued function \nsuch that\nwhere and its first derivative with respect to\n are continuous and bounded. Without losing generality, we assume\n, \nhold for , where and \nare some positive finite constants.\nFollowing the procedure in Section 3 ###reference_###,\nwe define the error states according to (11 ###reference_###) as\nwhere is given in (32 ###reference_###) and\nis the controller state where is the\nestimator of unknown parameter and \nis the dynamic filter variable to be designed.\nTo facilitate the stability analysis and simplify the derivation,\nwe introduce the coordinate transformation as\nwhere for is to be determined, \nto be designed, is the virtual\ncontroller and and -dynamics are designed\nas\nwith for and to be determined,\nand\nWe further introduce the time-varying state transformation for (63 ###reference_###)\nas\nwhere ,\n, \nwith\nwhere with ,\n. By doing so, we in fact introduce the time-varying\nstate transformation from to as\nwith ,\n,\n,\n, and\n.\nAs a result, the -dynamics can be expressed as \nWe propose the design criterion for functions\n and .\n: satisfies and \nfor , where where is denoted in\n(40 ###reference_###).\nConsider the system (61 ###reference_###),\n-dynamics in (32 ###reference_###) and -dynamics\nin (33 ###reference_###) with time-varying state transformation (69 ###reference_###).\nIf conditions in Theorem 4.1 ###reference_theorem1### and the design criterion\n hold, then the bound of \nsatisfies\nfor some function \nand .\nThe proof of Lemma 6.1 ###reference_lemma1### is given in the appendix. It implies\nthat when is bounded for ,\nthe prescribed-time convergence of is achieved. Then,\nthe controller is designed as\nwhere is designed in (64 ###reference_###).\nConsider the system (61 ###reference_###)\nwith the controller (71 ###reference_###), -dynamics in (32 ###reference_###)\nand -dynamics in (33 ###reference_###) with time-varying state\ntransformation (69 ###reference_###) under Assumption 6.1 ###reference_ass1###.\nSuppose conditions in Theorem 4.1 ###reference_theorem1### and the\ndesign criterion hold. Then,\nthere always exists a set of parameters for ,\n for and such that \nis an invariant set where \nand the DPTCO problem for strict-feedback MASs (61 ###reference_###)\nis solved."
76
+ },
77
+ {
78
+ "section_id": "7",
79
+ "parent_section_id": null,
80
+ "section_name": "Simulation Results",
81
+ "text": "In this section, we show two numerical examples to illustrate the\ntheoretical results. The graph for the two simulations is given by .\n(Robust DPTCO for Euler-Lagrange MASs)\nConsider the Euler-Lagrange MASs as , , \nwhere with ,\n, , and \n, , ,\n, , \nare unknown parameters for , and . Note that\nthe system is in the form of the chain-integrator systems in (45 ###reference_###)\nand satisfies (46 ###reference_###) due to the structural property of\nEuler-Lagrange systems.\nThe six robots are located in a thermal radiation field, and the relationship\nbetween the intensity of thermal radiation , temperature \nand distance can be roughly expressed as\n,\nwhere denotes the two-dimensional coordinates of the heat\nsource. Suppose each robot is capable of measuring the gradient information\nof the heat source with respect to distance. The objective is to design\ncontroller such that the six robots approach the heat source\nin a formation, and reduce the total displacement of the six robots\nfrom their original location. Thus, the global objective function\nis designed as\n\nwhere , , ,\n, , \nrepresent the formation shape, and and \nare objective weights.\nBy defining , the optimization problem\nis transformed into ,\nwhich is consistent with (2 ###reference_###). For the optimization\nproblem, we design -dynamics as\nin the form of (32 ###reference_###), (33 ###reference_###) such\nthat converges to the optimum within prescribed\ntime. Then, the reference trajectory for each robot dynamics is changed\nas\n.\nReplacing in Section 4 ###reference_###\nwith , we can design the controller following the procedures\nin Section 5 ###reference_### to solve the optimization problem. Let the initial condition to be ,\n, , ,\n, , ,\n, for .\nThe initial time is set as , and the prescribed-time\nscale . The parameters and gain functions are chosen as ,\n, , , ,\n. The weight coefficients \nand for objective function\nare chosen as and for .\nThe coordinate of heat source is set as .\n###figure_2### ###figure_3### ###figure_4### ###figure_5### The simulation results are shown in Figure.\n2 ###reference_### and 5 ###reference_###. In Figure. 5 ###reference_###, \nand converge to zero within , and thus the validity\nof the optimal trajectory generator designed in Section 4 ###reference_###\nis verified. In Figure. 2 ###reference_###, the six robots approach\nthe heat source in formation within the prescribed time.\n(Adaptive DPTCO for strict-feedback MASs) Consider\nthe strict-feedback MASs in the presence of parameter uncertainties\nas , , , ,\nwhere , .\n. The\nlocal objective function of each agent is ,\nwhere , ,\n,\n and are positive definite matrices. Using Global Optimization\nToolbox in MATLAB, the optimal agreement is\n,\nwhich is used for verification only. The parameters are chosen as\n, , , ,\n, , . ,\n. The initial values are ,\n, , ,\n, , ,\n, ,\nand , are the same as that in Example\n1.\nThe simulation results are shown in Figure. 5 ###reference_### and\n5 ###reference_###. In Figure. 5 ###reference_###, the tracking\nerror between each agent\u2019s output and optimum is bounded\nand achieves prescribed-time convergence towards zero. For simplicity,\nwe only provide the trajectories of , ,\n in Figure. 5 ###reference_###. These\ntrajectories show that we achieve prescribed-time convergence towards\nzero for , and ."
82
+ },
83
+ {
84
+ "section_id": "8",
85
+ "parent_section_id": null,
86
+ "section_name": "Conclusion",
87
+ "text": "In this paper, we propose a novel DPTCO algorithm for a class of high-order\nnonlinear MASs. A DPTCO framework is first constructed by embedding the\ncascade design such that the DPTCO problem is divided into optimum seeking for the whole system and a reference trajectory tracking\nproblem for each agent. The DPTCO framework is then utilized to solve the\nDPTCO problem for chain-integrator MASs and strict-feedback MASs.\nThe prescribed-time convergence lies in the time-varying gains, which\nincrease to infinity as time approaches the prescribed time. When\nsolving the tracking problem for the two specific MASs, high-order\nderivatives of the reference trajectory are not required. It would be very\ninteresting to further consider the DPTCO where the local objective functions are\nsubject to bound, equality, and inequality constraints."
88
+ }
89
+ ],
90
+ "appendix": [],
91
+ "tables": {},
92
+ "image_paths": {
93
+ "1": {
94
+ "figure_path": "2407.11413v2_figure_1.png",
95
+ "caption": "Figure 1: Cascaded system \u03a3=[\u03a31T,\u03a32T]T\u03a3superscriptsuperscriptsubscript\u03a31Tsuperscriptsubscript\u03a32TT\\Sigma=[\\Sigma_{1}^{\\mbox{\\tiny{T}}},\\Sigma_{2}^{\\mbox{\\tiny{T}}}]^{\\mbox{%\n\\tiny{T}}}roman_\u03a3 = [ roman_\u03a3 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT T end_POSTSUPERSCRIPT , roman_\u03a3 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT T end_POSTSUPERSCRIPT ] start_POSTSUPERSCRIPT T end_POSTSUPERSCRIPT\nwith d=[(d1)T,\u22ef,(dN)T]T\ud835\udc51superscriptsuperscriptsuperscript\ud835\udc511T\u22efsuperscriptsuperscript\ud835\udc51\ud835\udc41TTd=\\left[(d^{1})^{\\mbox{\\tiny{T}}},\\cdots,(d^{N})^{\\mbox{\\tiny{T}}}\\right]^{%\n\\mbox{\\tiny{T}}}italic_d = [ ( italic_d start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT ) start_POSTSUPERSCRIPT T end_POSTSUPERSCRIPT , \u22ef , ( italic_d start_POSTSUPERSCRIPT italic_N end_POSTSUPERSCRIPT ) start_POSTSUPERSCRIPT T end_POSTSUPERSCRIPT ] start_POSTSUPERSCRIPT T end_POSTSUPERSCRIPT.",
96
+ "url": "http://arxiv.org/html/2407.11413v2/x1.png"
97
+ },
98
+ "2": {
99
+ "figure_path": "2407.11413v2_figure_2.png",
100
+ "caption": "Figure 2: Trajectories of positions x1isuperscriptsubscript\ud835\udc651\ud835\udc56x_{1}^{i}italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_i end_POSTSUPERSCRIPT of the six\nrobots for 0\u2264t<T0\ud835\udc61\ud835\udc470\\leq t<T0 \u2264 italic_t < italic_T, where \u2219\u2219\\bullet\u2219 and \u25b2\u25b2\\blacktriangle\u25b2\ndenote the initial and final position, \u25cb\u25cb\\bigcirc\u25cb denotes the equipotential\nlines of P\ud835\udc43Pitalic_P.",
101
+ "url": "http://arxiv.org/html/2407.11413v2/x2.png"
102
+ },
103
+ "3": {
104
+ "figure_path": "2407.11413v2_figure_3.png",
105
+ "caption": "Figure 3: The trajectories of er\u2062(t)subscript\ud835\udc52\ud835\udc5f\ud835\udc61e_{r}(t)italic_e start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT ( italic_t ) and e\u02d9r\u2062(t)subscript\u02d9\ud835\udc52\ud835\udc5f\ud835\udc61\\dot{e}_{r}(t)over\u02d9 start_ARG italic_e end_ARG start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT ( italic_t )\n",
106
+ "url": "http://arxiv.org/html/2407.11413v2/x3.png"
107
+ },
108
+ "4": {
109
+ "figure_path": "2407.11413v2_figure_4.png",
110
+ "caption": "Figure 4: The trajectories of tracking error between\neach agent\u2019s output and optimum\n",
111
+ "url": "http://arxiv.org/html/2407.11413v2/x4.png"
112
+ },
113
+ "5": {
114
+ "figure_path": "2407.11413v2_figure_5.png",
115
+ "caption": "Figure 5: The trajectories of \u03b8^1\u2062(t)superscript^\ud835\udf031\ud835\udc61\\hat{\\theta}^{1}(t)over^ start_ARG italic_\u03b8 end_ARG start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT ( italic_t ),\n\u2016x21\u2062(t)\u2016normsuperscriptsubscript\ud835\udc6521\ud835\udc61\\|x_{2}^{1}(t)\\|\u2225 italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT ( italic_t ) \u2225, \u2016x31\u2062(t)\u2016normsuperscriptsubscript\ud835\udc6531\ud835\udc61\\|x_{3}^{1}(t)\\|\u2225 italic_x start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 1 end_POSTSUPERSCRIPT ( italic_t ) \u2225.\n",
116
+ "url": "http://arxiv.org/html/2407.11413v2/x5.png"
117
+ }
118
+ },
119
+ "validation": true,
120
+ "references": [
121
+ {
122
+ "1": {
123
+ "title": "Crc Press, 1998.",
124
+ "author": "C. Edwards and S. Spurgeon, Sliding mode control: theory and\napplications.",
125
+ "venue": null,
126
+ "url": null
127
+ }
128
+ }
129
+ ],
130
+ "url": "http://arxiv.org/html/2407.11413v2"
131
+ }
20241127/2408.06157v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2408.07401v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2408.10511v3.json ADDED
@@ -0,0 +1,184 @@
1
+ {
2
+ "title": "Single-cell Curriculum Learning-based Deep Graph Embedding Clustering",
3
+ "abstract": "The swift advancement of single-cell RNA sequencing (scRNA-seq) technologies enables the investigation of cellular-level tissue heterogeneity. Cell annotation significantly contributes to the extensive downstream analysis of scRNA-seq data. However, The analysis of scRNA-seq for biological inference presents challenges owing to its intricate and indeterminate data distribution, characterized by a substantial volume and a high frequency of dropout events. Furthermore, the quality of training samples varies greatly, and the performance of the popular scRNA-seq data clustering solution GNN could be harmed by two types of low-quality training nodes: 1) nodes on the boundary; 2) nodes that contribute little additional information to the graph.\nTo address these problems, we propose a single-cell curriculum learning-based deep graph embedding clustering (scCLG).\nWe first propose a Chebyshev graph convolutional autoencoder with multi-criteria (ChebAE) that combines three optimization objectives, including topology reconstruction loss of cell graphs, zero-inflated negative binomial (ZINB) loss, and clustering loss, to learn cell-cell topology representation.\nMeanwhile, we employ a selective training strategy to train GNN based on the features and entropy of nodes and prune the difficult nodes based on the difficulty scores to keep the high-quality graph.\nEmpirical results on a variety of gene expression datasets show that our model outperforms state-of-the-art methods.\nThe code of scCLG will be made publicly available at https://github.com/LFD-byte/scCLG.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The advent of single-cell RNA sequencing (scRNA-seq) technologies has enabled the measurement of gene expressions in a vast number of individual cells, offering the potential to deliver detailed and high-resolution understandings of the intricate cellular landscape. The analysis of scRNA-seq data plays a pivotal role in biomedical research, including identifying cell types and subtypes, studying developmental processes, investigating disease mechanisms, exploring immunological responses, and supporting drug development and personalized therapy [1 ###reference_b1###]. Cell annotation is the fundamental step in analyzing scRNA-seq data. In early research, various traditional clustering methods have been applied such as K-means, spectral clustering, hierarchical clustering and density-based clustering. However, scRNA-seq data are so sparse that most of the measurements are zeros. The traditional clustering algorithm often produces suboptimal results.\nSeveral clustering methods have been developed to address these limitations. CIDR [2 ###reference_b2###], MAGIC [3 ###reference_b3###], and SAVER [4 ###reference_b4###] have been developed to initially address the issue of missing values, commonly referred to as dropouts, followed by the clustering of the imputed data. Despite the benefits of imputation, these methods encounter challenges in capturing the intricate inherent structure of scRNA-seq data. Alternative strategies, such as SIMLR [5 ###reference_b5###] and MPSSC [6 ###reference_b6###], utilize multi-kernel spectral clustering to acquire robust similarity measures. Nevertheless, the computational complexity associated with generating the Laplacian matrix hinders their application to large-scale datasets. Additionally, these techniques fail to account for crucial attributes of transcriptional data, including zero inflation and over-dispersion.\nIn recent years, deep learning has shown excellent performance in the fields of image recognition and processing, speech recognition, recommendation systems, and autonomous driving [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###].\nSome deep learning clustering methods have effectively emerged to model the high-dimensional and sparse nature of scRNA-seq data such as scziDesk [12 ###reference_b12###], scDCC [13 ###reference_b13###], and scDeepCluster [14 ###reference_b14###]. These models implement auto-encoding architectures. However, they often ignore the cell-cell relationships, which can make the clustering task more challenging. Recently, the emerging graph neural network (GNN) has deconvoluted node relationships in a graph through neighbor information propagation in a deep learning architecture. scGNN [15 ###reference_b15###] and scGAE [16 ###reference_b16###] combine deep autoencoder and graph clustering algorithms to preserve the neighborhood relationships. However, their training strategies largely ignore the importance of different nodes in the graph and how their orders can affect the optimization status, which may result in suboptimal performance of the graph learning models.\nIn particular, curriculum learning (CL) is an effective training strategy for gradually guiding model learning in tasks with obvious difficulty levels [17 ###reference_b17###]. Curriculum learning has applications in natural language processing, computer vision, and other fields that require processing complex data. 
However, research on scRNA-seq data clustering is still blank, and the impact of traditional curriculum learning methods retaining all data on removing difficult samples on the model has not been explored yet.\nMotivated by the above observations, we propose here a single-cell curriculum learning-based deep graph embedding clustering name scCLG, which simultaneously learns cell-cell topology representations and identifies cell clusters from an autoencoder following an easy-to-hard pattern (Fig. 1 ###reference_###).\nWe first propose a Chebyshev graph convolutional autoencoder with multi-criteria (ChebAE) to preserve the topological structure of the cells in the low-dimensional latent space (Fig. 2 ###reference_###).\nThen, with the help of feature information, we design a hierarchical difficulty measurer, in which two difficulty measurers from local and global perspectives are proposed to measure the difficulty of training nodes. The local difficulty measurer computes local feature distribution to identify difficult nodes because their neighbors have diverse labels; the global difficulty measurer identifies difficult nodes by calculating the node entropy and graph entropy.\nAfter that, the most difficult nodes will be pruned to keep the high-quality graph.\nFinally, scCLG can combine three optimization objectives, including topology reconstruction loss of cell graphs, zero-inflated negative binomial (ZINB) loss, and clustering loss, to learn cell-cell topology representation, optimize cell clustering label allocation, and produce superior clustering results.\nThe main contributions of our work are summarized below:\nWe propose a single-cell curriculum learning-based deep graph embedding clustering called scCLG, which integrates the meaningful training order into a Chebyshev graph convolutional autoencoder to capture the global probabilistic structure of data.\nscCLG constructs a cell graph and uses a Chebyshev graph convolutional autoencoder to collectively preserve the topological structural information and the cell-cell relationships in scRNA-seq data.\nTo the best of our knowledge, this is the first article to incorporate curriculum learning with data pruning into a graph convolutional autoencoder to model highly sparse and overdispersed scRNA-seq data.\nWe evaluate our model alongside state-of-the-art competitive methods on 7 real scRNA-seq datasets. The results demonstrate that scCLG outperforms all of the baseline methods."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Related Work",
15
+ "text": "scRNA-seq clustering. With the advent of deep learning (DL), more recent works have utilized deep neural networks to automatically extract features from scRNA-seq data for enhancing feature representation.\nscDC [14 ###reference_b14###] simultaneously learns to feature representation and clustering via explicit modeling of scRNA-seq data generation.\nIn another work, scziDesk [12 ###reference_b12###] combines deep learning with a denoising autoencoder to characterize scRNA-seq data while proposing a soft self-training K-means algorithm to cluster the cell population in the learned latent space.\nscDCC [13 ###reference_b13###] integrates prior knowledge to loss function with pairwise constraints to scRNA-seq.\nThe high-order representation and topological relations could be naturally learned by the graph neural network.\nscGNN [15 ###reference_b15###] introduces a multi-modal autoencoder framework. This framework formulates and aggregates cell\u2013cell relationships with graph neural networks and models heterogeneous gene expression patterns using a left-truncated mixture Gaussian model.\nscGAE [16 ###reference_b16###] builds a cell graph and uses a multitask\u2011oriented graph autoencoder to preserve topological structure information and feature information in scRNA\u2011seq data simultaneously. However, the above clustering methods overlook the learning difficulty of different samples or nodes.\nCurriculum learning. Curriculum learning, which mimics the human learning process of learning data samples in a meaningful order, aims to enhance the machine learning models by using a designed training curriculum, typically following an easy-to-hard pattern [17 ###reference_b17###].\nThe CL framework consists of two components: a difficulty measurer which measures the difficulty of samples and a training scheduler which arranges the ordered samples into training. The key to CL is how to define the promising measurer. SPCL [18 ###reference_b18###] takes into account both prior knowledge known before training and the learning progress during training. CLNode [19 ###reference_b19###] measures the difficulty of training nodes based on the label information. SMMCL [20 ###reference_b20###] assumes that different unlabeled samples have different difficulty levels for propagation, so it should follow an easy-to-hard sequence with an updated curriculum for label propagation.\nscSPaC [21 ###reference_b21###] utilizes an advanced NMF for scRNA-seq data clustering based on soft self-paced learning, which gradually adds cells from simple to complex to our model until the model converges. However, the above CL methods don\u2019t utilize the structural information of nodes in graph neural networks and don\u2019t consider the impact of difficult nodes on the graph.\n###figure_1###"
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "III PRELIMINARIES",
21
+ "text": "In this section, we first introduce some notations, symbols, and necessary background. Then we present the Chebyshev graph convolution."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "III-A Notations",
27
+ "text": "Let be an undirected cell graph, where is a set of nodes associated with different cells; specifies the existence of an edge between the and nodes; and is the node feature matrix and element is the count of the gene in the cell. Let be the adjacency matrix of , where if and are connected, otherwise is set equal to zero.\nThe graph Laplacian , where is the identity matrix, and is the diagonal degree matrix with .\nKNN algorithm is employed to construct the cell graph and each node in the graph represents a cell [22 ###reference_b22###]."
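To make the construction described above concrete, the sketch below builds a symmetric KNN cell graph from a toy count matrix and forms a normalized graph Laplacian. The Euclidean metric, k = 3 and the symmetric normalization are illustrative assumptions rather than the paper's exact preprocessing.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(20, 50)).astype(float)  # toy cells x genes count matrix

def knn_adjacency(X, k=3):
    """Symmetric KNN adjacency: connect each cell to its k nearest cells."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    np.fill_diagonal(d, np.inf)                                  # no self loops
    A = np.zeros_like(d)
    nn = np.argsort(d, axis=1)[:, :k]                            # k nearest neighbours
    rows = np.repeat(np.arange(X.shape[0]), k)
    A[rows, nn.ravel()] = 1.0
    return np.maximum(A, A.T)                                    # symmetrize

A = knn_adjacency(X, k=3)
deg = A.sum(1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt                 # normalized Laplacian
print(A.shape, L.shape)
```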
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "III-B Chebyshev Graph Convolution",
33
+ "text": "Chebyshev graph convolution (ChebConv) is a variant of graph convolutional networks that uses Chebyshev polynomials to approximate the feature decomposition of graph Laplacian matrices, thereby achieving convolution operations on graph data. The theoretical foundation of ChebConv is graph signal processing and spectrogram theory, which introduces the concept of graph signal processing into graph convolutional networks. The ChebConv layer is defined as follows:\nwhere represents the order of Chebyshev polynomials used to approximate graph convolution kernels. is the layer\u2019s trainable parameter and is computed recursively by:\nwhere denotes the scaled and normalized Laplacian . is the largest eigenvalue of and is the identity matrix.\nCompared with basic GCN, ChebConv effectively reduces the model\u2019s parameter count and computational complexity by transforming graph convolution operations into approximations of Chebyshev polynomials, while maintaining its ability to capture graph structures."
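The ChebConv propagation described here can be reproduced in a few lines: the Chebyshev recursion T_0(x) = 1, T_1(x) = x, T_k(x) = 2x T_{k-1}(x) - T_{k-2}(x) is applied to the rescaled Laplacian, and the K filtered signals are combined with trainable weights. The sketch below uses random weights, K = 3 and a random toy graph purely for illustration; it is not the authors' implementation.

```python
import numpy as np

def chebconv(X, L, K=3, out_dim=16, rng=np.random.default_rng(1)):
    """One ChebConv layer: sum_k T_k(L_tilde) @ X @ Theta_k (random Theta here)."""
    n = L.shape[0]
    lam_max = np.max(np.linalg.eigvalsh(L))          # largest Laplacian eigenvalue
    L_tilde = 2.0 * L / lam_max - np.eye(n)          # rescale spectrum into [-1, 1]
    Tx = [X, L_tilde @ X]                            # T_0(L)X and T_1(L)X
    for _ in range(2, K):
        Tx.append(2.0 * L_tilde @ Tx[-1] - Tx[-2])   # Chebyshev recursion
    thetas = [rng.normal(0, 0.1, (X.shape[1], out_dim)) for _ in range(K)]
    return sum(T @ th for T, th in zip(Tx, thetas))  # aggregate the K filtered terms

# Toy example graph and features:
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))
A = (rng.random((20, 20)) < 0.2).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0.0)
D = np.diag(1.0 / np.sqrt(np.maximum(A.sum(1), 1e-12)))
L = np.eye(20) - D @ A @ D
H = chebconv(X, L, K=3, out_dim=16)
print(H.shape)   # (20, 16)
```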
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "IV Proposed Approach",
39
+ "text": "In this section,\nwe first present our idea of a multi-criteria ChebConv graph autoencoder.\nSecond, we introduce how the scCLG model parameters can be learned using a meaningful sample order.\nFinally, we elaborate on the proposed scRNA-seq data clustering algorithm by combining the above two points."
40
+ },
41
+ {
42
+ "section_id": "4.1",
43
+ "parent_section_id": "4",
44
+ "section_name": "IV-A Multi-Criteria ChebConv Graph Autoencoder",
45
+ "text": "As shown in Fig. 2 ###reference_###, to capture the cell graph structure and node relationships, we developed a variant of the graph convolution autoencoder that uses a stacked topology Chebyshev graph convolutional network as the graph encoder. Compared with a basic GCN, ChebConv effectively reduces the model\u2019s parameter count and computational complexity by transforming graph convolution operations into approximations of Chebyshev polynomials.\nWe use three different criteria to map the encoded compressed vector from different perspectives and jointly optimize the modeling ability of the autoencoder.\nThe gene expression matrix and normalized adjacency matrix are used as inputs.\nThrough the graph encoder, the feature dimension of each node will be compressed to a smaller size, and the compressed vector features will be decoded by three loss components: reconstruction loss (), ZINB loss (), and clustering loss (). These criteria share encoder parameters to decompose an optimization objective into three optimization objectives for better capturing the cell-cell relationship:\n###figure_2### \nMore detailed optimization information about , and is shown below."
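A minimal sketch of how the three criteria could be combined into a single objective that shares the encoder parameters is shown below; the weights alpha and beta are placeholders, since the exact weighting of the three terms is not given in this extract.

```python
# Minimal sketch of combining the three criteria into one objective.
# alpha and beta are placeholder weights, not the paper's values.
def total_loss(loss_rec, loss_zinb, loss_clu, alpha=1.0, beta=1.0):
    """Weighted sum of the three losses that share the same encoder parameters."""
    return loss_rec + alpha * loss_zinb + beta * loss_clu

print(total_loss(0.40, 1.25, 0.08))  # e.g. values produced by the three decoders
```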
46
+ },
47
+ {
48
+ "section_id": "4.1.1",
49
+ "parent_section_id": "4.1",
50
+ "section_name": "IV-A1 Reconstruction Loss",
51
+ "text": "The majority of the structure and information inherent in the scRNA-seq data is conserved within the latent embedded representation generated by the scCLG encoder.\nThe adjacency matrix decoder of the graph autoencoder can thus be defined as the inner product between the latent embeddings:\nwhere represents the scCLG encoder function and is the reconstructed adjacency matrix. Therefore, the reconstruction loss of and should be minimized in the learning process as below:"
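A small sketch of the inner-product decoder and one plausible form of the reconstruction loss is given below; the binary cross-entropy form and the helper names are assumptions made for illustration, with Z standing for the latent embedding produced by the encoder.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reconstruction_loss(A, Z, eps=1e-9):
    """Inner-product decoder A_hat = sigmoid(Z Z^T) with a BCE-style reconstruction loss."""
    A_hat = sigmoid(Z @ Z.T)
    bce = -(A * np.log(A_hat + eps) + (1.0 - A) * np.log(1.0 - A_hat + eps))
    return A_hat, bce.mean()

rng = np.random.default_rng(0)
Z = rng.normal(size=(20, 16))                       # toy latent embeddings
A = (rng.random((20, 20)) < 0.2).astype(float)
A = np.maximum(A, A.T)                              # toy symmetric adjacency
A_hat, loss = reconstruction_loss(A, Z)
print(round(loss, 4))
```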
52
+ },
53
+ {
54
+ "section_id": "4.1.2",
55
+ "parent_section_id": "4.1",
56
+ "section_name": "IV-A2 ZINB Loss",
57
+ "text": "In order to more effectively capture the structure of scRNA-seq data by decoding from the latent embedded representation , we integrate the ZINB model into a Chebyshev graph convolutional autoencoder to capture the global probability structure of the scRNA-seq data.\nBased on this foundation, we propose to apply the ZINB distribution model to simulate the data distribution to capture the characters of scRNA-seq data as follows:\nwhere and are the mean and dispersion in the negative binomial distribution, respectively. is the weight of the point mass at zero. The proportion replaces the probability p. After that, to model the ZINB distribution, the decoder network has three output layers to compute the three sets of parameters. The estimated parameters can be defined as follows:\nwhere is a three-layer fully connected neural network with hidden layers of 128, 256 and 512 nodes. represents the learned weights parameter matrices. and are parameters denoting the estimations of and , respectively. The selection of the activation function depends on the range and definition of the parameters. In terms of the parameter , the suitable activation function for it is sigmoid because the interval of is between 0 and 1. Due to the non-negative value of the mean and dispersion , we choose the exponential activation function for them. The negative log-likelihood of the ZINB distribution can be used as the reconstruction loss function of the original data , which can be defined as below:"
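The ZINB negative log-likelihood used as the count reconstruction objective can be written as in the sketch below. This follows the standard ZINB formulation (mean mu, dispersion theta, dropout probability pi) common to DCA/scDeepCluster-style models and is not claimed to be the authors' exact implementation.

```python
import numpy as np
from scipy.special import gammaln

def zinb_nll(x, mu, theta, pi, eps=1e-10):
    """Negative log-likelihood of the zero-inflated negative binomial distribution."""
    # log NB(x | mu, theta)
    log_nb = (gammaln(x + theta) - gammaln(theta) - gammaln(x + 1.0)
              + theta * np.log(theta / (theta + mu) + eps)
              + x * np.log(mu / (theta + mu) + eps))
    # zero inflation: zeros can come from the point mass pi or from the NB itself
    nb_zero = (theta / (theta + mu)) ** theta
    log_zero = np.log(pi + (1.0 - pi) * nb_zero + eps)
    log_nonzero = np.log(1.0 - pi + eps) + log_nb
    return -np.where(x < 0.5, log_zero, log_nonzero).mean()

rng = np.random.default_rng(0)
x = rng.poisson(1.0, size=(20, 50)).astype(float)    # toy counts with many zeros
mu = np.full_like(x, 1.0)                            # decoder mean output
theta = np.full_like(x, 2.0)                         # decoder dispersion output
pi = np.full_like(x, 0.3)                            # decoder dropout output
print(round(zinb_nll(x, mu, theta, pi), 4))
```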
58
+ },
59
+ {
60
+ "section_id": "4.1.3",
61
+ "parent_section_id": "4.1",
62
+ "section_name": "IV-A3 Clustering Loss",
63
+ "text": "scRNA-seq clustering, as an unsupervised learning task, lacks guidance from labels, which makes it difficult to capture effective optimization signals during the training phase. To overcome this challenge, we apply a clustering module to guide the algorithm to adjust the cluster centers to ensure that the distribution of samples within each cluster is as consistent as possible while minimizing inter-cluster differences. The objective takes the form of Kullback\u2013Leibler (KL) divergence and is formulated as follows:\nwhere is the soft label of the embedding node which is defined as the similarity between and cluster centre measured by Student\u2019s t-distribution.\nThis can be described as follows:\nMeanwhile, is the auxiliary target distribution, which puts more emphasis on the similar data points assigned with high confidence on the basis of , as below:\nSince the target distribution is defined based on , the embedding learning of is supervised in a self-optimizing way to enable it to be close to the target distribution ."
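The soft assignment q, the sharpened target distribution p and the KL objective follow the usual DEC-style construction; the NumPy sketch below fixes the Student's t degrees of freedom to 1, which is the common choice but an assumption with respect to this paper.

```python
import numpy as np

def soft_assign(Z, centers):
    """Student's t similarity between embeddings and cluster centers (nu = 1)."""
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    q = 1.0 / (1.0 + d2)
    return q / q.sum(1, keepdims=True)

def target_distribution(q):
    """Auxiliary target p that emphasizes high-confidence assignments."""
    w = q ** 2 / q.sum(0)
    return w / w.sum(1, keepdims=True)

def kl_clustering_loss(p, q, eps=1e-10):
    """Average KL divergence between the target p and the soft assignment q."""
    return (p * np.log((p + eps) / (q + eps))).sum(1).mean()

rng = np.random.default_rng(0)
Z = rng.normal(size=(20, 16))          # toy latent embeddings
centers = rng.normal(size=(4, 16))     # toy cluster centers
q = soft_assign(Z, centers)
p = target_distribution(q)
print(round(kl_clustering_loss(p, q), 4))
```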
64
+ },
65
+ {
66
+ "section_id": "4.2",
67
+ "parent_section_id": "4",
68
+ "section_name": "IV-B Curriculum Learning with Data Pruning",
69
+ "text": "In this subsection, we first describe the proposed difficulty measurement method from both local and global perspectives and assign a difficulty score to each cell. Based on the difficulty score, we investigate the impact of nodes with higher difficulty on model optimization."
70
+ },
71
+ {
72
+ "section_id": "4.2.1",
73
+ "parent_section_id": "4.2",
74
+ "section_name": "IV-B1 Hierarchical Difficulty Measurer",
75
+ "text": "Our Hierarchical Difficulty Measurer consists of two difficulty measures from different perspectives. In this section, we present the definition of two difficulty measures and how to calculate them.\nLocal Difficulty Measurer.\nWe introduce how to identify difficult nodes from a local perspective.\nNodes located at the boundaries of multiple classes may reside in transitional regions within the feature space, leading to less distinct or consistent feature representations, thereby increasing the difficulty of classification. The first type of difficult node should have diverse neighbors that belong to multiple classes.\nIntuitively, features of nodes within the same class tend to be more similar. This is due to the influence of neighboring node features, resulting in nodes with similar connectivity patterns exhibiting comparable feature representations. In order to identify these difficult nodes, we calculate the diversity of the neighborhood\u2019s features:\nwhere denotes the similarity between cell and cell . A larger indicates a more diverse neighborhood. is the neighborhood of cell . As a result, during neighborhood aggregation, these nodes aggregate neighbors\u2019 features to get an unclear representation, making them difficult for GNNs to learn. By paying less attention to these difficult nodes, scCLG learns more useful information and effectively improves the accuracy of backbone GNNs.\nGlobal Difficulty Measurer.\nThen we introduce how to identify difficult nodes from a global perspective. Entropy plays a pivotal role in feature selection as a metric from information theory used to quantify uncertainty. In the process of feature selection, we leverage entropy to assess a feature\u2019s contribution to the target variable. When a feature better distinguishes between different categories of the target variable, its entropy value tends to be relatively low, signifying that it provides more information and reduces overall uncertainty. Consequently, in feature selection, lower entropy values indicate features that offer greater discriminatory power, aiding in the differentiation of target variable categories. We assume nodes that have lower entropy have fewer contributions to the graph. Therefore, this type of node is difficult to classify. Inspired by Entropy Variation [23 ###reference_b23###], We consider the node contribution as the variation of network entropy before and after its removal.\nFor a node in graph , we define as probabilities:\nwhere .\nThe entropy of the graph is as follows:\nwhere is the degree of node . is the entropy of graph with degree matrix.\nThe global difficulty of the node is as follows:\nwhere is the change if one node and its connections are removed from the network. is the modified graph under the removel of . A lower indicates a lower influence on graph structure and is also more difficult. The global difficulty of node is to subtract the normalized from 1.\nConsidering two difficulty measurers from local and global perspectives, we finally define the difficulty of as:\nwhere is the weight coefficient assigned to each difficulty measurer to control the balance of the total difficulty."
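A rough sketch of the two measurers is given below: the local score averages the feature dissimilarity between a cell and its graph neighbours, and the global score is one minus the normalized change in degree-distribution entropy when the node is removed. The cosine dissimilarity, the min-max normalization and the balance weight alpha are illustrative assumptions, not the authors' exact formulas.

```python
import numpy as np

def local_difficulty(X, A, eps=1e-12):
    """Average cosine dissimilarity between each cell and its neighbours."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)
    sim = Xn @ Xn.T
    deg = A.sum(1)
    return (A * (1.0 - sim)).sum(1) / np.maximum(deg, 1.0)

def degree_entropy(A, eps=1e-12):
    """Entropy of the degree distribution of graph A."""
    deg = A.sum(1)
    p = deg / max(deg.sum(), eps)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def global_difficulty(A):
    """1 - normalized entropy variation when the node (and its edges) is removed."""
    base = degree_entropy(A)
    n = A.shape[0]
    delta = np.empty(n)
    for i in range(n):
        keep = np.delete(np.arange(n), i)
        delta[i] = abs(base - degree_entropy(A[np.ix_(keep, keep)]))
    delta = (delta - delta.min()) / (delta.max() - delta.min() + 1e-12)
    return 1.0 - delta

rng = np.random.default_rng(0)
X = rng.normal(size=(15, 30))
A = (rng.random((15, 15)) < 0.25).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0.0)
alpha = 0.5                                        # assumed balance weight
difficulty = alpha * local_difficulty(X, A) + (1 - alpha) * global_difficulty(A)
print(np.round(difficulty, 3))
```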
76
+ },
77
+ {
78
+ "section_id": "4.2.2",
79
+ "parent_section_id": "4.2",
80
+ "section_name": "IV-B2 Data Pruning",
81
+ "text": "With the hierarchical difficulty measurer, we can get a list of nodes sorted in ascending order of nodes based on difficulty. The node at the end of the list is a nuisance for the overall model learning, so should it be retained? The sources of noise in graph neural networks can be varied, firstly, the attribute information of the nodes may contain noise, which affects the representation of the node features and hence the learning of the GNN. Secondly, the presence of anomalous data may cause the spectral energy of the graph to be \u201dright-shifted\u201d, the distribution of spectral energy shifts from low to high frequencies. These noises will not only reduce the performance of the graph neural network but also propagate through the GNN in the topology, affecting the prediction results of the whole network. In order to solve this problem, we designed a data pruning strategy based on the calculated node difficulty. Specifically, we define a data discarding hyperparameter . The value of is set while balancing data integrity and model generalization performance.\nAs shown in Fig. 4 ###reference_###, the scRNA-seq clustering performance of the scCLG improves after removing the node features with the highest difficulty which prove our hypothesis."
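Given the difficulty scores, pruning amounts to dropping the most difficult fraction of nodes and re-indexing the graph, e.g. as in the sketch below; the helper name is hypothetical, and rho = 0.11 follows the value reported in the experiments section.

```python
import numpy as np

def prune_difficult_nodes(X, A, difficulty, rho=0.11):
    """Drop the ceil(rho * n) most difficult nodes and return the reduced graph."""
    n = len(difficulty)
    n_drop = int(np.ceil(rho * n))
    keep = np.argsort(difficulty)[: n - n_drop]      # keep the easiest nodes
    keep = np.sort(keep)                             # preserve original ordering
    return X[keep], A[np.ix_(keep, keep)], keep

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
A = (rng.random((100, 100)) < 0.05).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0.0)
difficulty = rng.random(100)
X_kept, A_kept, kept_idx = prune_difficult_nodes(X, A, difficulty, rho=0.11)
print(X_kept.shape, A_kept.shape)                    # (89, 30) (89, 89)
```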
82
+ },
83
+ {
84
+ "section_id": "4.3",
85
+ "parent_section_id": "4",
86
+ "section_name": "IV-C The Proposed scCLG Algorithm",
87
+ "text": "Our model undergoes a two-phase training process. In the first phase, we pretrain the proposed GNN model ChebAE for discriminative feature learning with an adjacency matrix decoder and a ZINB decoder. The number of first-phase training rounds is epochs. The output of the encoder is a low-dimensional vector which is used to calculate node difficulty using the hierarchical difficulty measurer. We retain the top of the data with high sample quality for subsequent training.\nIn the formal training phase, we use the pretrained parameters and train the model for epochs with the pruned data. This phase focuses on learning the clustering task. Unlike the pre-training phase, we use all three criteria to optimize the model in more detail.\nWe use the pacing function mentioned in [19 ###reference_b19###] to generate the size of the node subset.\nWe illustrate the detailed information in Algorithm 1 ###reference_###."
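The pacing function borrowed from CLNode [19] controls how many of the easiest retained nodes are exposed to the model at each epoch; the sketch below shows a simple linear schedule, with lambda0 and T as placeholder values since the exact pacing form used here is not given in this extract.

```python
import numpy as np

def linear_pacing(epoch, n_nodes, lam0=0.25, T=100):
    """Fraction of the easiest nodes used at `epoch`, growing linearly to 1."""
    frac = min(1.0, lam0 + (1.0 - lam0) * epoch / T)
    return int(np.ceil(frac * n_nodes))

sorted_by_difficulty = np.argsort(np.random.default_rng(0).random(500))  # easy -> hard
for epoch in [0, 25, 50, 100]:
    k = linear_pacing(epoch, len(sorted_by_difficulty))
    batch_nodes = sorted_by_difficulty[:k]           # nodes visible to the model
    print(epoch, k)
```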
88
+ },
89
+ {
90
+ "section_id": "5",
91
+ "parent_section_id": null,
92
+ "section_name": "Experiments",
93
+ "text": ""
94
+ },
95
+ {
96
+ "section_id": "5.1",
97
+ "parent_section_id": "5",
98
+ "section_name": "Setup",
99
+ "text": "Dataset. For the former, we collect 7 scRNA-seq datasets from different organisms. The cell numbers range from 870 to 9519, and the cell type numbers vary from 2 to 9.\nBaselines.\nThe performance of scCLG was compared with two traditional clustering methods (Kmeans and Spectral), and several state-of-the-art scRNA-seq data clustering methods including four single-cell deep embedded clustering methods (scziiDesk, scDC, scDCC and scGMAI) and three single-cell deep graph embedded clustering methods (scTAG, scGAE and scGNN).\nDeep soft K-means clustering with self-training for single-cell RNA sequence data (scziDesk) [12 ###reference_b12###]:\nIt combines a denoising autoencoder to characterize scRNA-seq data while proposing a soft self-training K-means algorithm to cluster the cell population in the learned latent space.\nModel-based deep embedded clustering method (scDC) [14 ###reference_b14###]: It simultaneously learns to feature representation and clustering via explicit modeling of scRNA-seq data generation.\nModel-based deep embedding for constrained clustering analysis of single cell RNA-seq data (scDCC) [13 ###reference_b13###] It integrates prior information into the modeling process to guide our deep learning model to simultaneously learn meaningful and desired latent representations and clusters.\nscGMAI: a Gaussian mixture model for clustering single-cell RNA-Seq data based on deep autoencoder (scGMAI) [24 ###reference_b24###] It utilizes autoencoder networks to reconstruct gene expression values from scRNA-Seq data and FastICA is used to reduce the dimensions of reconstructed data.\nscGNN is a novel graph neural network framework for single-cell RNA-Seq analyses (scGNN) [15 ###reference_b15###]: It integrates three iterative multi-modal autoencoders and models heterogeneous gene expression patterns using a left-truncated mixture Gaussian model.\nA topology-preserving dimensionality reduction method for single-cell RNA-seq data using graph autoencoder (scGAE) [16 ###reference_b16###] It builds a cell graph and uses a multitask\u2011oriented graph autoencoder to preserve topological structure information and feature information in scRNA\u2011seq data simultaneously.\nZinb-based graph embedding autoencoder for single-cell rna-seq interpretations (scTAG) [22 ###reference_b22###] It simultaneously learns cell\u2013cell topology representations and identifies cell clusters based on deep graph convolutional network integrating the ZINB model.\nImplementation Details.\nIn the proposed scCLG method, the cell graph was constructed using the KNN algorithm with the nearest neighbor parameter . In the multi-criterias ChebConv graph autoencoder, the hidden fully connected layers in the ZINB decoder are set at 128, 256 and 512. Our algorithm consists of pre-training and formal training, with 1000 and 500 epochs for pre-training and formal training, respectively. Our model was optimized using the Adam optimizer, employing a learning rate of 5e-4 during pre-training and 1e-4 during formal training. The pruning rate is set to 0.11. For baseline methods, the parameters were set the same as in the original papers."
100
+ },
101
+ {
102
+ "section_id": "5.2",
103
+ "parent_section_id": "5",
104
+ "section_name": "Clustering Result",
105
+ "text": "Table II ###reference_### shows the clustering performance of our method against multiple state-of-the-art methods, and the values highlighted in bold represent the best results. Obviously, our method outperforms other baseline clustering methods for clustering performance. For the 7 scRNA-seq datasets, scCLG achieves the best NMI and ARI on all datasets. Meanwhile, we can observe that the general deep graph embedded models have no advantage and the clustering performance is not stable. Specifically, scGNN performs poorly on \u201dWang_Lung\u201d. The main reason is that the information structure preserved by the cell graph alone cannot address the particularities of scRNA-seq data well, and further data order is necessary, which again proves the superiority of scCLG. The performance of the deep clustering method and traditional clustering method exhibits significant fluctuations across different datasets. However, scCLG still has an advantage. This is because the scCLG could effectively learn the key representations of the scRNA-seq data in a meaningful order so that the model can exhibit a smooth learning trajectory. In summary, we can conclude that scCLG performs better than the other methods under two clustering evaluation metrics."
106
+ },
107
+ {
108
+ "section_id": "5.3",
109
+ "parent_section_id": "5",
110
+ "section_name": "Parameter Analysis",
111
+ "text": "###figure_3###"
112
+ },
113
+ {
114
+ "section_id": "5.3.1",
115
+ "parent_section_id": "5.3",
116
+ "section_name": "V-C1 Different Neighbor Parameter Analysis",
117
+ "text": "represents the number of nearest neighbors to consider when constructing cell graph.\nIn order to investigate the impact of , we ran our model with the parameters 5, 10, 15, 25. Fig. 3 ###reference_### (A) shows the NMI and ARI values with different numbers of . As depicted in Fig. 3 ###reference_### (A), we observe that the two metrics first increase rapidly from parameter 5 to 10, reach the best value at , and then decrease slowly from parameter 20 to 25. Therefore, we set the neighbor parameter k as 20 in our scCLG model."
118
+ },
119
+ {
120
+ "section_id": "5.3.2",
121
+ "parent_section_id": "5.3",
122
+ "section_name": "V-C2 Different Numbers of Variable Genes Analysis",
123
+ "text": "In single-cell data analysis, highly variable genes vary significantly among different cells, which helps to reveal the heterogeneity within the cell population and more accurately identify cell subpopulations.\nTo explore the impact of the number of selected highly variable genes, we apply scCLG on real datasets with gene numbers from 300 to 1500. Fig. 3 ###reference_### (B) shows the line plot of the average NMI and ARI on the 7 datasets selecting 300, 500, 1000 and 1500 genes with high variability, respectively.\nIt can be seen that the performance with 500 highly variable genes is better, while the performance with 300 genes is much worse than the others.\nTherefore, to save computational resources and reduce running time, we set the number of selected high-variance genes in the model to 500.\n###figure_4###"
124
+ },
125
+ {
126
+ "section_id": "5.3.3",
127
+ "parent_section_id": "5.3",
128
+ "section_name": "V-C3 Different Data Pruning Rate Analysis",
129
+ "text": "In single-cell data analysis, data quality can be improved by pruning lower-quality samples thereby affecting the ability to generalize the model.\nTo explore the impact of the selected data, we run our model with pruning rate parameters from 0.06 to 0.21 to drop difficult nodes. We also compared our pruning strategy with two different pruning strategies, namely pruning easy nodes and randomly pruning nodes.\nFig. 4 ###reference_### shows the ARI and NMI values with different numbers of and pruning strategy.\nAs depicted in Fig. 4 ###reference_###, we can observe that the best performance is achieved when the is 0.11 and when difficult nodes are pruned. This indicates that the improvement of data quality can significantly improve the performance of the model. Compared to pruning easy nodes and randomly pruning nodes, pruning difficult nodes brings higher profit because difficult nodes have a negative impact on the representation of the graph. Furthermore, randomly pruning nodes is better than pruning easy nodes, indicating the effectiveness of our difficulty measurer which can assign reasonable difficulty scores to nodes."
130
+ },
131
+ {
132
+ "section_id": "5.4",
133
+ "parent_section_id": "5",
134
+ "section_name": "Ablation Study",
135
+ "text": "In this experiment, we analyzed the effect of each component of the scCLG method. Specifically, we ablated the hierarchical difficulty measurer, yielding a variant named Without CL. Table III ###reference_### tabulates the average ARI and NMI values on the 7 datasets with scCLG. As shown in Table III ###reference_###, it can be clearly observed that gene screening and learning the scRNA-seq data in an easy-to-hard pattern improve the clustering performance. For the 7 scRNA-seq datasets, scCLG achieves the best ARI and NMI on 5 of them.\nIn summary, all components of the scCLG method are reasonable and effective."
136
+ },
137
+ {
138
+ "section_id": "6",
139
+ "parent_section_id": null,
140
+ "section_name": "VI Conclusion",
141
+ "text": "In this research, we propose a single-cell curriculum learning-based deep graph embedding clustering method.\nOur approach first utilizes the Chebyshev graph convolutional autoencoder to learn a low-dimensional feature representation which preserves the cell\u2013cell topological structure. Then we define two types of difficult nodes and rank the nodes in the graph based on the measured difficulty to train them in a meaningful order. Meanwhile, we prune the most difficult nodes to keep the quality of the node features high.\nOur method shows higher clustering performance than state-of-the-art approaches for scRNA-seq data. Empirical results provide strong evidence that this performance can be attributed to the proposed mechanisms and particularly their ability to handle the difficult nodes."
142
+ }
143
+ ],
144
+ "appendix": [],
145
+ "tables": {
146
+ "1": {
147
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Summary of the real scRNA-seq datasets.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.1.1.1.1\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T1.1.1.1.1.1\" style=\"width:21.7pt;\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.1.1.1.2\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T1.1.1.1.2.1\" style=\"width:21.7pt;\">Cells</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.1.1.1.3\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T1.1.1.1.3.1\" style=\"width:21.7pt;\">Genes</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.1.1.1.4\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T1.1.1.1.4.1\" style=\"width:21.7pt;\">Class</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.1.1.1.5\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T1.1.1.1.5.1\" style=\"width:21.7pt;\">Platform</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.1.1\">QS_Diaphragm</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.1.2\">870</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.1.3\">23341</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.1.4\">5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.1.5\">Smart-seq2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.3.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.3.2.1\">QS_Limb_Muscle</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.3.2.2\">1090</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.3.2.3\">23341</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.3.2.4\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.3.2.5\">Smart-seq2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.4.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.3.1\">QS_Lung</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.3.2\">1676</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.3.3\">23341</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.3.4\">11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.3.5\">Smart-seq2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.5.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.5.4.1\">Muraro</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.5.4.2\">2122</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.5.4.3\">19046</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.5.4.4\">9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.5.4.5\">CEL-seq2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.6.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.5.1\">QS_Heart</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.5.2\">4365</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.5.3\">23341</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.5.4\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.5.5\">Smart-seq2</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S5.T1.1.7.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.7.6.1\">Plasschaert</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.7.6.2\">6977</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.7.6.3\">28205</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.7.6.4\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.7.6.5\">inDrop</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.8.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.8.7.1\">Wang_Lung</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.8.7.2\">9519</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.8.7.3\">14561</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.8.7.4\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.1.8.7.5\">10x</td>\n</tr>\n</tbody>\n</table>\n</figure>",
148
+ "capture": "TABLE I: Summary of the real scRNA-seq datasets."
149
+ },
150
+ "2": {
151
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Performance comparison between various baselines on seven real datasets.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T2.1.1.1.1\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T2.1.1.1.1.1\" style=\"width:21.7pt;\">Metric</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T2.1.1.1.2\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T2.1.1.1.2.1\" style=\"width:21.7pt;\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.3\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T2.1.1.1.3.1\" style=\"width:21.7pt;\">scCLG</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.4\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T2.1.1.1.4.1\" style=\"width:21.7pt;\">scTAG</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.5\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T2.1.1.1.5.1\" style=\"width:21.7pt;\">scGAE</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.6\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T2.1.1.1.6.1\" style=\"width:21.7pt;\">scGNN</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.7\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T2.1.1.1.7.1\" style=\"width:21.7pt;\">scziDesk</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.8\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T2.1.1.1.8.1\" style=\"width:21.7pt;\">scDC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.9\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T2.1.1.1.9.1\" style=\"width:21.7pt;\">scDCC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.10\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T2.1.1.1.10.1\" style=\"width:21.7pt;\">scGMAI</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.11\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T2.1.1.1.11.1\" style=\"width:21.7pt;\">Kmeans</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.1.1.1.12\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T2.1.1.1.12.1\" style=\"width:21.7pt;\">Spectral</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T2.1.2.1.1\" rowspan=\"7\"><span class=\"ltx_text\" id=\"S5.T2.1.2.1.1.1\">ARI</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T2.1.2.1.2\">QS_Diaphragm</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.1.2.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.2.1.3.1\">0.9836</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.1.2.1.4\">0.9628</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_tt\" id=\"S5.T2.1.2.1.5\">0.5638</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.1.2.1.6\">0.5646</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.1.2.1.7\">0.9517</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.1.2.1.8\">0.6479</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.1.2.1.9\">0.8895</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.1.2.1.10\">0.4111</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.1.2.1.11\">0.9110</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.1.2.1.12\">0.9170</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.1.3.2.1\">QS_Limb_Muscle</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.3.2.2.1\">0.9828</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2.3\">0.9813</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2.4\">0.5419</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2.5\">0.6399</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2.6\">0.9743</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2.7\">0.5384</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2.8\">0.3449</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2.9\">0.4899</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2.10\">0.8922</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.3.2.11\">0.9615</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.1.4.3.1\">QS_Lung</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.4.3.2.1\">0.7946</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.3.3\">0.6526</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.3.4\">0.2797</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.3.5\">0.3631</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.3.6\">0.7401</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.3.7\">0.4504</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.3.8\">0.2908</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.3.9\">0.4622</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.3.10\">0.7329</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.4.3.11\">0.7559</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.1.5.4.1\">Muraro</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.5.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.5.4.2.1\">0.8959</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.5.4.3\">0.8878</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.5.4.4\">0.6413</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.5.4.5\">0.5080</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.5.4.6\">0.6784</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.5.4.7\">0.6609</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.5.4.8\">0.7100</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.5.4.9\">0.5132</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.5.4.10\">0.8452</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.5.4.11\">0.8741</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S5.T2.1.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.1.6.5.1\">QS_Heart</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.6.5.2.1\">0.9503</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.5.3\">0.9371</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.5.4\">0.2497</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.5.5\">0.5222</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.5.6\">0.9324</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.5.7\">0.4673</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.5.8\">0.2584</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.5.9\">0.4368</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.5.10\">0.8376</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.6.5.11\">0.8757</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.7.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.1.7.6.1\">Plasschaert</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.7.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.7.6.2.1\">0.7907</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.7.6.3\">0.7697</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.7.6.4\">0.3540</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.7.6.5\">0.4272</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.7.6.6\">0.4867</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.7.6.7\">0.4070</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.7.6.8\">0.4668</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.7.6.9\">0.5711</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.7.6.10\">0.7352</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.7.6.11\">0.2916</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.8.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.1.8.7.1\">Wang_Lung</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.8.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.8.7.2.1\">0.9527</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.8.7.3\">0.9004</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.8.7.4\">0.1035</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.8.7.5\">0.1771</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.8.7.6\">0.8975</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.8.7.7\">0.2520</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.8.7.8\">0.5998</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.8.7.9\">0.1325</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.8.7.10\">0.7995</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.8.7.11\">0.0345</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.9.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.T2.1.9.8.1\" rowspan=\"7\"><span class=\"ltx_text\" id=\"S5.T2.1.9.8.1.1\">NMI</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T2.1.9.8.2\">QS_Diaphragm</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.9.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.9.8.3.1\">0.9670</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.9.8.4\">0.9346</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.9.8.5\">0.7351</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S5.T2.1.9.8.6\">0.7608</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.9.8.7\">0.9210</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.9.8.8\">0.7807</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.9.8.9\">0.8223</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.9.8.10\">0.6836</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.9.8.11\">0.8846</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.9.8.12\">0.8881</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.10.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.1.10.9.1\">QS_Limb_Muscle</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.10.9.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.10.9.2.1\">0.9682</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.10.9.3\">0.9616</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.10.9.4\">0.7398</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.10.9.5\">0.7726</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.10.9.6\">0.9468</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.10.9.7\">0.7048</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.10.9.8\">0.4624</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.10.9.9\">0.7198</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.10.9.10\">0.8911</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.10.9.11\">0.9389</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.11.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.1.11.10.1\">QS_Lung</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.11.10.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.11.10.2.1\">0.8318</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.11.10.3\">0.8038</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.11.10.4\">0.6766</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.11.10.5\">0.6642</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.11.10.6\">0.7543</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.11.10.7\">0.6840</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.11.10.8\">0.4982</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.11.10.9\">0.7312</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.11.10.10\">0.7785</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.11.10.11\">0.7976</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.12.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.1.12.11.1\">Muraro</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.12.11.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.12.11.2.1\">0.8506</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.12.11.3\">0.8399</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.12.11.4\">0.7619</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.12.11.5\">0.6294</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.12.11.6\">0.7349</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.12.11.7\">0.7549</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.12.11.8\">0.8347</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.12.11.9\">0.7168</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.12.11.10\">0.8194</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.12.11.11\">0.8291</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.13.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row 
ltx_border_r\" id=\"S5.T2.1.13.12.1\">QS_Heart</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.13.12.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.13.12.2.1\">0.9064</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.13.12.3\">0.8857</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.13.12.4\">0.6039</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.13.12.5\">0.6540</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.13.12.6\">0.8723</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.13.12.7\">0.6531</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.13.12.8\">0.4242</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.13.12.9\">0.6941</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.13.12.10\">0.8299</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.13.12.11\">0.8454</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.14.13\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T2.1.14.13.1\">Plasschaert</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.14.13.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.14.13.2.1\">0.7696</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.14.13.3\">0.7379</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.14.13.4\">0.5563</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.14.13.5\">0.5856</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.14.13.6\">0.6469</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.14.13.7\">0.6122</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.14.13.8\">0.5786</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.14.13.9\">0.5711</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.14.13.10\">0.6915</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.1.14.13.11\">0.5216</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.15.14\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S5.T2.1.15.14.1\">Wang_Lung</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.15.14.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.15.14.2.1\">0.8942</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.15.14.3\">0.8210</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.15.14.4\">0.3150</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.15.14.5\">0.3975</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.15.14.6\">0.7965</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.15.14.7\">0.1511</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.15.14.8\">0.5862</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.15.14.9\">0.3432</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.15.14.10\">0.7167</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.1.15.14.11\">0.0367</td>\n</tr>\n</tbody>\n</table>\n</figure>",
152
+ "capture": "TABLE II: Performance comparison between various baselines on seven real datasets."
153
+ },
154
+ "3": {
155
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Ablation study measured by ARI and NMI values.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T3.1.1.1.1\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T3.1.1.1.1.1\" style=\"width:21.7pt;\">Metric</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T3.1.1.1.2\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T3.1.1.1.2.1\" style=\"width:43.4pt;\">Methods</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.1.1.1.3\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T3.1.1.1.3.1\" style=\"width:21.7pt;\">scCLG</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.1.1.1.4\"><span class=\"ltx_text ltx_inline-block\" id=\"S5.T3.1.1.1.4.1\" style=\"width:43.4pt;\">Without CL</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T3.1.2.1.1\" rowspan=\"7\"><span class=\"ltx_text\" id=\"S5.T3.1.2.1.1.1\">ARI</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T3.1.2.1.2\">QS_Diaphragm</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.1.2.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.2.1.3.1\">0.9836</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.1.2.1.4\">0.9778</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.3.2.1\">QS_Limb_Muscle</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.3.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.3.2.2.1\">0.9828</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.3.2.3\">0.9791</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.4.3.1\">QS_Lung</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.4.3.2\">0.7946</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.4.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.4.3.3.1\">0.7947</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.5.4.1\">Muraro</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.5.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.5.4.2.1\">0.8959</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.5.4.3\">0.8897</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.6.5.1\">QS_Heart</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.6.5.2\">0.9503</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.6.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.6.5.3.1\">0.9530</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.7.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.7.6.1\">Plasschaert</th>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T3.1.7.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.7.6.2.1\">0.7907</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.7.6.3\">0.7903</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.8.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.8.7.1\">Wang_Lung</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.8.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.8.7.2.1\">0.9527</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.8.7.3\">0.9527</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.9.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.T3.1.9.8.1\" rowspan=\"7\"><span class=\"ltx_text\" id=\"S5.T3.1.9.8.1.1\">NMI</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T3.1.9.8.2\">QS_Diaphragm</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.9.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.9.8.3.1\">0.9670</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.9.8.4\">0.9579</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.10.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.10.9.1\">QS_Limb_Muscle</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.10.9.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.10.9.2.1\">0.9682</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.10.9.3\">0.9613</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.11.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.11.10.1\">QS_Lung</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.11.10.2\">0.8318</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.11.10.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.11.10.3.1\">0.8321</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.12.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.12.11.1\">Muraro</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.12.11.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.12.11.2.1\">0.8506</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.12.11.3\">0.8468</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.13.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.13.12.1\">QS_Heart</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.13.12.2\">0.9064</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.13.12.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.13.12.3.1\">0.9088</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.14.13\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.14.13.1\">Plasschaert</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.14.13.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.14.13.2.1\">0.7696</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.14.13.3\">0.7693</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.15.14\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S5.T3.1.15.14.1\">Wang_Lung</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.1.15.14.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.15.14.2.1\">0.8942</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.1.15.14.3\">0.8942</td>\n</tr>\n</tbody>\n</table>\n</figure>",
156
+ "capture": "TABLE III: Ablation study measured by ARI and NMI values."
157
+ }
158
+ },
159
+ "image_paths": {
160
+ "1": {
161
+ "figure_path": "2408.10511v3_figure_1.png",
162
+            "caption": "Figure 1: Framework of scCLG. (A) Pre-training: pretraining the proposed ChebAE with adjacency matrix decoder and ZINB decoder. Then calculate node difficulty using a hierarchical difficulty measurer and prune the data. (B) Formal training: using all three criteria to optimize the model in more detail in an easy-to-hard pattern with pruned data.",
163
+ "url": "http://arxiv.org/html/2408.10511v3/x1.png"
164
+ },
165
+ "2": {
166
+ "figure_path": "2408.10511v3_figure_2.png",
167
+ "caption": "Figure 2: The model architecture of multi-criteria ChebAE. ChebAE integrates three loss components: reconstruction loss, ZINB loss, and a clustering loss to optimize the low-dimensional latent representation.",
168
+ "url": "http://arxiv.org/html/2408.10511v3/x2.png"
169
+ },
170
+ "3": {
171
+ "figure_path": "2408.10511v3_figure_3.png",
172
+            "caption": "Figure 3: Parameter analysis. (A) Comparison of the average ARI and NMI values with different neighbor parameters k. (B) Comparison of the average ARI and NMI values with different numbers of genes.",
173
+ "url": "http://arxiv.org/html/2408.10511v3/x3.png"
174
+ },
175
+ "4": {
176
+ "figure_path": "2408.10511v3_figure_4.png",
177
+ "caption": "Figure 4: Comparison of the average ARI and NMI values with different data pruning rates and pruning strategies.",
178
+ "url": "http://arxiv.org/html/2408.10511v3/x4.png"
179
+ }
180
+ },
181
+ "validation": true,
182
+ "references": [],
183
+ "url": "http://arxiv.org/html/2408.10511v3"
184
+ }
20241127/2408.11841v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2408.12957v3.json ADDED
@@ -0,0 +1,320 @@
1
+ {
2
+ "title": "Image Segmentation in Foundation Model Era: A Survey",
3
+ "abstract": "Image segmentation is a long-standing challenge in computer vision, studied continuously over several decades, as evidenced by seminal algorithms such as N-Cut, FCN, and MaskFormer. With the advent of foundation models (FMs), contemporary segmentation methodologies have embarked on a new epoch by either adapting FMs (e.g., CLIP, Stable Diffusion, DINO) for image segmentation or developing dedicated segmentation foundation models (e.g., SAM, SAM2). These approaches not only deliver superior segmentation performance, but also herald newfound segmentation capabilities previously unseen in deep learning context. However, current research in image segmentation lacks a detailed analysis of distinct characteristics, challenges, and solutions associated with these advancements. This survey seeks to fill this gap by providing a thorough review of cutting-edge research centered around FM-driven image segmentation. We investigate two basic lines of research \u2013 generic image segmentation (i.e., semantic segmentation, instance segmentation, panoptic segmentation), and promptable image segmentation (i.e., interactive segmentation, referring segmentation, few-shot segmentation) \u2013 by delineating their respective task settings, background concepts, and key challenges. Furthermore, we provide insights into the emergence of segmentation knowledge from FMs like CLIP, Stable Diffusion, and DINO. An exhaustive overview of over 300 segmentation approaches is provided to encapsulate the breadth of current research efforts. Subsequently, we engage in a discussion of open issues and potential avenues for future research. We envisage that this fresh, comprehensive, and systematic survey catalyzes the evolution of advanced image segmentation systems. A public website is created to continuously track developments in this fast advancing field: https://github.com/stanley-313/ImageSegFM-Survey.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Image segmentation has been, and still is, an important and challenging research field in computer vision, with its aim to partition pixels into distinct groups. It constitutes an initial step in achieving higher-order goals including physical scene understanding, reasoning over visual commonsense, perceiving social affordances, and has widespread applications in domains like autonomous driving, medical image analysis, automated surveillance, and image editing.\nThe task has garnered extensive attention over decades, resulting in a plethora of algorithms in the literature, ranging from traditional, non-deep learning methods such as thresholding [1 ###reference_b1###, 2 ###reference_b2###], histogram mode seeking [3 ###reference_b3###, 4 ###reference_b4###], region growing and merging [5 ###reference_b5###, 6 ###reference_b6###], spatial clustering [7 ###reference_b7###], energy diffusion [8 ###reference_b8###], superpixels [9 ###reference_b9###], conditional and Markov random fields [10 ###reference_b10###],\nto more advanced, deep learning methods, e.g., FCN-based [11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###] and particularly the DeepLab family [17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###], RNN-based [21 ###reference_b21###], Transformer-based [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###], and the R-CNN family [29 ###reference_b29###, 30 ###reference_b30###, 31 ###reference_b31###]. These approaches have shown remarkable performance and robustness across all critical segmentation fields, e.g., semantic, instance, and panoptic segmentation. Yet, the exploration of image segmentation continues beyond these advancements.\nFoundation Models (FMs) [32 ###reference_b32###] have emerged as transformative technologies in recent years, reshaping our understanding of core domains in artificial intelligence (AI) including natural language processing [33 ###reference_b33###], computer vision [34 ###reference_b34###], and many other interdisciplinary areas [35 ###reference_b35###, 36 ###reference_b36###, 37 ###reference_b37###]. Notable examples include large language models (LLMs) like GPT-3 [38 ###reference_b38###] and GPT-4 [39 ###reference_b39###], multimodal large language models (MLLMs) like Flamingo [40 ###reference_b40###] and Gemini [41 ###reference_b41###], and diffusion models (DMs) like Sora [42 ###reference_b42###] and Stable Diffusion (SD) [43 ###reference_b43###]. These models, distinguished by their immense scale and complexity, have exhibited emergent capabilities [44 ###reference_b44###, 45 ###reference_b45###] to tackle a wide array of intricate tasks with notable efficacy and efficiency. Meanwhile, they have unlocked new possibilities, such as generating chains of reasoning [46 ###reference_b46###], offering human-like responses in dialogue scenarios [38 ###reference_b38###], creating realistic-looking videos [42 ###reference_b42###], and synthesizing novel programs [47 ###reference_b47###]. 
The advent of GPT-4 and Sora has sparked considerable excitement within the AI community to fulfill artificial general intelligence (AGI) [48 ###reference_b48###].\nIn the era dominated by FMs, image segmentation has undergone significant evolution, marked by distinct features uncommon in the preceding research era. To underscore the motivation behind our survey, we highlight several characteristics exemplifying this transformation:\nFM technology has led to the emergence of segmentation generalists. Unlike traditional frameworks (e.g., FCN, Mask R-CNN), contemporary segmentation models have become promptable, i.e., generate a mask (akin to an answer in LLMs) based on a handcrafted prompt specifying what to segment in an image. The LLM-like promptable interface leads to a significant enhancement of task generality of segmentors, enabling them to rapidly adapt to various existing and new segmentation tasks, in a zero-shot (e.g., SAM [49 ###reference_b49###], SEEM [50 ###reference_b50###]) or few-shot (e.g., SegGPT [51 ###reference_b51###]) manner. Note that these promptable models markedly differ from earlier universal models [23 ###reference_b23###, 24 ###reference_b24###, 22 ###reference_b22###, 25 ###reference_b25###], which remain limited to a fixed set of predetermined tasks, e.g., joint semantic, instance, and panoptic segmentation, with a closed vocabulary.\nTraining-free segmentation has recently emerged as a burgeoning research area [52 ###reference_b52###, 53 ###reference_b53###, 54 ###reference_b54###, 55 ###reference_b55###, 56 ###reference_b56###, 57 ###reference_b57###]. It aims to extract segmentation knowledge from pre-trained FMs, marking a departure from established learning paradigms, such as\nsupervised, semi-supervised, weakly supervised, and self-supervised learning. Recent studies highlight that segmentation masks can be derived effortlessly from attention maps or internal representations within models like CLIP, Stable Diffusion or DINO/DINOv2, even though they were not originally designed for segmentation purposes.\nThere is a notable trend towards integrating LLMs into segmentation systems to harness their reasoning capabilities and world knowledge [58 ###reference_b58###, 59 ###reference_b59###, 60 ###reference_b60###, 61 ###reference_b61###]. The LLM-powered segmentors possess the capacity to read, listen, and even reason to ground real-world, abstract linguistic queries into specific pixel regions. While previous efforts have explored similar capabilities in tasks such as referring segmentation [62 ###reference_b62###], these methods are limited in handling basic queries like \u201cthe front-runner\u201d. In contrast, LLM-powered segmentors can adeptly manage more complicated queries like \u201cwho will win the race?\u201d. This capability represents a notable advancement towards developing more intelligent vision systems.\nGenerative models, particularly text-to-image diffusion models, garner increasing attention in recent image segmentation research. It has been observed that DMs implicitly learn meaningful object groupings and semantics during the text-to-image generation process [63 ###reference_b63###], functioning as strong unsupervised representation learners. This motivates a stream of works to directly decode the latent code of pre-trained DMs into segmentation masks, in either a label-efficient or completely unsupervised manner [63 ###reference_b63###, 64 ###reference_b64###]. 
Moreover, some efforts extend the inherent denoising diffusion process in DMs to segmentation, by approaching image segmentation from an image-conditioned mask generation perspective [65 ###reference_b65###, 66 ###reference_b66###, 67 ###reference_b67###].\nIn light of these features, we found that most existing surveys in the field [68 ###reference_b68###, 69 ###reference_b69###, 70 ###reference_b70###] are now outdated \u2013 one of the latest surveys [70 ###reference_b70###] was published in 2021 and focuses only on semantic and instance segmentation. This leaves a notable gap in capturing recent FM-based approaches.\nOur Contributions. To fill the gap, we offer an exhaustive and timely overview to examine how foundation models are transforming the field of image segmentation.\nThis survey marks the first comprehensive exploration of recent image segmentation approaches that are built upon famous FMs, such as CLIP [71 ###reference_b71###], Stable Diffusion [43 ###reference_b43###], DINO [56 ###reference_b56###]/DINOv2 [57 ###reference_b57###], SAM [49 ###reference_b49###] and LLMs/MLLMs [72 ###reference_b72###]. It spans the breadth of the field and delves into the nuances of individual methods, thereby providing the reader with a thorough and up-to-date understanding of this topic. Beyond this, we elucidate open questions and potential directions to illuminate the path forward in this key field.\n###figure_1### ###figure_2### \u00a71 ###reference_###\u00a72 ###reference_###\u00a72.1 ###reference_###\u00a72.2 ###reference_###\u00a73 ###reference_###\u00a73.1 ###reference_###\u00a73.2 ###reference_###\u00a73.3 ###reference_###\u00a74 ###reference_###\u00a74.1 ###reference_###\u00a74.2 ###reference_###\u00a74.3 ###reference_###\u00a75 ###reference_###\u00a75.1 ###reference_###\u00a75.2 ###reference_###\u00a75.3 ###reference_###\u00a76 ###reference_###\u00a77 ###reference_###\nRelated Surveys and Differences. In the past decade, many surveys have studied image segmentation from various perspectives. For example, [73 ###reference_b73###] reviews region- and boundary-based segmentation methods in 2015. With the transition to the deep learning era, a series of works [74 ###reference_b74###, 70 ###reference_b70###, 75 ###reference_b75###, 76 ###reference_b76###, 77 ###reference_b77###, 78 ###reference_b78###] has summarized progress in generic segmentation tasks like semantic, instance and panoptic segmentation. A recent study [79 ###reference_b79###] focuses on the specific task of open-vocabulary segmentation, while [80 ###reference_b80###] only studies Transformer-based segmentation. Other studies delve into crucial aspects of image segmentation, such as evaluation protocols [81 ###reference_b81###] or loss functions [82 ###reference_b82###]. In addition, there exist surveys that focus on segmentation techniques in specialized domains, e.g., video [83 ###reference_b83###], medical imaging [84 ###reference_b84###, 85 ###reference_b85###].\nGiven the accelerated evolution of FMs, there has been a surge of surveys that elucidate the fundamental principles and pioneering efforts in LLMs [33 ###reference_b33###], MLLMs [72 ###reference_b72###], DMs [86 ###reference_b86###]. However, conspicuously absent from these works is a discussion on the role of FMs in advancing image segmentation.\nThe survey most relevant to ours is [87 ###reference_b87###], which offers an extensive review of recent developments related to SAM [49 ###reference_b49###]. 
SAM represents a groundbreaking contribution to the segmentation field, making [87 ###reference_b87###] a valuable resource. However, within the broader context of FMs, SAM is just one among many; thus, the scope of [87 ###reference_b87###] is still limited in encompassing the entirety of progress in segmentation field.\nUnlike prior surveys, our work stands apart in its exclusive focus on the contributions of FMs to image segmentation, and fills an existing gap in the current research landscape. We document the latest techniques, and spotlight major trends, and envision prospective research trajectories which will aid researchers in staying abreast of advances in image segmentation and accelerate progress in the field.\nSurvey Organization. Fig. 2 ###reference_### shows the structure of this survey. Section \u00a72 ###reference_### presents essential background on image segmentation and FMs. \u00a73 ###reference_### highlights the emergency of segmentation knowledge from existing FMs. \u00a74 ###reference_### and \u00a75 ###reference_### review the most important FM-based image segmentation methods, mainly from the past three years. \u00a76 ###reference_### raises open issues and future directions. We conclude the paper in \u00a77 ###reference_###."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Background",
15
+ "text": "In this section, we first present a unified formulation of image segmentation tasks and categorize research directions in \u00a72.1 ###reference_###. Then, we provide a concise background overview of prominent FMs in \u00a72.2 ###reference_###."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Image Segmentation",
21
+ "text": ""
22
+ },
23
+ {
24
+ "section_id": "2.1.1",
25
+ "parent_section_id": "2.1",
26
+ "section_name": "2.1.1 A Unified Formulation",
27
+ "text": "The central goal of the paper is to investigate the contributions of FMs to modern image segmentation technology. To this end, we first introduce a unified mathematical formulation applicable to various segmentation tasks. Concretely, denote and as the input space and output segmentation space, respectively. An image segmentation solution seeks to learn an ideal mapping function :\nHere is typically instantiated as a neural network. The input space is decomposed as , where represents an image domain (comprising solely a single image ), and refers to a collection of prompts, which is exclusively employed in certain segmentation tasks. The output space is , which encompasses a set of segmentation mask and a vocabulary of semantic categories associated with these masks. Eq. 1 ###reference_### furnishes a structured framework for understanding image segmentation, wherein a neural network is trained to map an input image, along with potentially user-specified prompts, to segmentation masks as well as corresponding semantic categories. Based on Eq. 1 ###reference_###, we subsequently build a taxonomy for image segmentation."
28
+ },
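As an aside to the formulation above, the decomposition of the input space into an image plus an optional prompt set, and of the output space into masks plus a vocabulary, can be written down as a small typed interface. The sketch below is purely illustrative; the class and field names are invented for the example and are not taken from the survey.

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

@dataclass
class SegInput:
    """Input space: an image, plus an optional prompt set used only in promptable segmentation."""
    image: Any
    prompts: Optional[List[Any]] = None  # points, boxes, text queries, or support image-mask pairs

@dataclass
class SegOutput:
    """Output space: a set of segmentation masks and the semantic vocabulary attached to them."""
    masks: List[Any] = field(default_factory=list)
    categories: List[str] = field(default_factory=list)

def segment(x: SegInput) -> SegOutput:
    """Stands in for the learned mapping from inputs to segmentations, a neural network in practice."""
    raise NotImplementedError
```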
29
+ {
30
+ "section_id": "2.1.2",
31
+ "parent_section_id": "2.1",
32
+ "section_name": "2.1.2 Image Segmentation Category",
33
+ "text": "According to whether is provided, we categorize image segmentation methods into two classes (Fig. 1 ###reference_###): generic image segmentation (GIS) and promptable image segmentation (PIS).\nGeneric Image Segmentation. GIS aims to segment an image into distinct regions, each associated with a semantic category or an object. In GIS, the input space comprises solely the image, i.e., , indicating . Based on the definition of output space , GIS methods can be further categorized into three major types: (i) semantic segmentation (Fig. 1 ###reference_###a) aims to identify and label each pixel with a semantic class in . (ii) instance segmentation (Fig. 1 ###reference_###b) involves grouping pixels that belong to the same semantic class into separate object instances. (iii) panoptic segmentation (Fig. 1 ###reference_###c) integrates semantic and instance segmentation to predict per-pixel class and instance labels, and is able to provide a comprehensive scene parsing.\nFurthermore, based on whether the testing vocabulary includes novel classes absent from the training vocabulary , the three tasks are studied under two settings: closed-vocabulary (i.e., ) and open-vocabulary (i.e., ) segmentation. Notably, the closed-vocabulary setup has been extensively studied over the past decade. However, its open-vocabulary counterpart is still in its infancy and has garnered attention only in recent years, particularly with the advent of FMs.\nPromptable Image Segmentation. PIS extends GIS by additionally incorporating a set of prompts , specifying the target to segment. In general, PIS methods only generate segmentation masks closely related to the prompts and do not directly predict classes. While the term \u201cprompt\u201d is relatively new, PIS has been studied for many years. Depending upon the prompt type, PIS methods can be grouped into the following categories: (i) interactive segmentation (Fig. 1 ###reference_###d) aims to segment out specific objects or parts according to user input, often provided through clicks, scribbles, boxes, or polygons, thus are visual prompts; (ii) referring segmentation (Fig. 1 ###reference_###e) entails extracting the corresponding region referred by a linguistic phrase, thus refers to textual prompts; (iii) few-shot segmentation (Fig. 1 ###reference_###f) targets at segmenting novel objects in given query image with a few annotated support images, i.e., refers to a collection of image-mask pairs. While great progress has been made in these segmentation challenges, previous studies address various prompt types independently. In sharp contrast, FM-based methods aim to integrate them into a unified framework. Moreover, in-context segmentation has emerged as a novel few-shot segmentation task."
34
+ },
35
+ {
36
+ "section_id": "2.1.3",
37
+ "parent_section_id": "2.1",
38
+ "section_name": "2.1.3 Learning Paradigms for Image Segmentation",
39
+ "text": "Several prevalent learning strategies are employed to approximate the function in Eq. 1 ###reference_###.\n(i) Supervised learning: modern image segmentation methods are generally learned in a fully supervised manner, necessitating a collection of training images and their desired outputs, i.e. per-pixel annotations.\n(ii) Unsupervised learning: in the absence of explicit annotated supervision, the task of approximating falls under unsupervised learning. Most existing unsupervised learning-based image segmentation models utilize self-supervised techniques, training networks with automatically-generated pseudo labels derived from image data.\n(iii) Weakly-supervised learning: in this case, supervision information may be inexact, incomplete or inaccurate. For inexact supervision, labels are typically acquired from a more easily annotated domain (e.g., image tag, bounding box, scribble). In the case of incomplete supervision, labels are provided for only a subset of training images. Inaccurate supervision entails per-pixel annotations for all training images, albeit with the presence of noise.\n(iv) Training free: in addition to the aforementioned strategies, a novel paradigm \u2013 training-free segmentation \u2013 has gained attention in the FM era, aiming to extract segmentation directly from pre-trained FMs, without involving any model training."
40
+ },
41
+ {
42
+ "section_id": "2.2",
43
+ "parent_section_id": "2",
44
+ "section_name": "Foundation Model",
45
+ "text": "FMs are initially elucidated in [32 ###reference_b32###] as \u201cany model that is trained on broad data (generally using self-supervision at scale) that can be adapted to a wide range of downstream tasks\u201d. The term \u2018foundation\u2019 is used to underscore critically central and incomplete character of FMs: homogenization of the methodologies across research communities and emergence of new capabilities. While the basic ingredients of the FMs, such as deep neural networks and self-supervised learning, have been around for many years, the paradigm shift towards FMs is significant because the emergence and homogenization allow replacing narrow task-specific models with more generic task-agnostic models that are not strongly tied to a particular task or domain. In the subsequent subsections, we provide a brief review of language (\u00a72.2.1 ###reference_.SSS1###) and vision foundation models (\u00a72.2.2 ###reference_.SSS2###). Notably, we only focus on topics relevant to this survey, and direct interested readers to [88 ###reference_b88###, 33 ###reference_b33###] for more comprehensive discussions."
46
+ },
47
+ {
48
+ "section_id": "2.2.1",
49
+ "parent_section_id": "2.2",
50
+ "section_name": "2.2.1 Language Foundation Model",
51
+ "text": "Large Language Models (LLMs). Language modeling is one of the primary approaches to advancing language intelligence of machines. In general, it aims to model the generative likelihood of word sequences, so as to predict the probabilities of future tokens. In the past two decades, language modeling has evolved from the earliest statistical language models (SLMs) to neural language models (NLMs), then to small-sized pre-trained language models (PLMs), and finally to nowadays LLMs [33 ###reference_b33###]. As enlarged PLMs (in terms of model size, data size and training compute), LLMs not only achieve a significant zero-shot performance improvement (even in some cases matching finetuned models), but also show strong reasoning capabilities across various domains, e.g., code writing [47 ###reference_b47###], math problem solving [89 ###reference_b89###]. A remarkable application of LLMs is ChatGPT, which has attracted widespread attention and transformed the way we interact with AI technology.\nMultimodal Large Language Models (MLLMs). MLLMs [72 ###reference_b72###] are multimodal extensions of LLMs by bringing together the reasoning power of LLMs with the capability to process non-textual modalities (e.g., vision, audio). MLLMs represent the next level of LLMs. On one hand, multimodal perception is a natural way for knowledge acquisition and interaction with the real world, and thus serves as a fundamental component for achieving AGI; on the other hand, the multimodal extension expands the potential of pure language modeling to more complex tasks in, e.g., robotics and autonomous driving."
52
+ },
53
+ {
54
+ "section_id": "2.2.2",
55
+ "parent_section_id": "2.2",
56
+ "section_name": "2.2.2 Visual Foundation Model",
57
+ "text": "Contrastive Language-Image Pre-training (CLIP). CLIP [71 ###reference_b71###] embodies a language-supervised vision model trained on 400M image-text pairs sourced from the Internet. The model has an encoder-only architecture, consisting of separate encoders for image and text encoding. It is trained by an image-text contrastive learning objective:\nwhere denotes the image and text embeddings of the -th image-text example . and indicate the number of examples and softmax temperature, respectively. The loss maximizes agreement between the embeddings of matching image and text pairs while minimizing it for non-matching pairs. In practice, text-image contrastive loss is calculated similarly, and the model is trained by a joint loss: . ALIGN [90 ###reference_b90###] is a follow-up work that harnesses for visual representation learning. It simplifies the costly data curation process in CLIP, and succeeds to further scale up representation learning with a noisy dataset of over one billion image-text pairs. Both CLIP and ALIGN acquire semantically-rich visual concepts and demonstrate impressive transferability in recognizing novel categories, leading to increased adoption for tackling zero-shot and open-vocabulary recognition tasks.\nDiffusion Models (DMs). DMs are a family of generative models that are Markov chains trained with variational inference. They have demonstrated remarkable potential in creating visually realistic samples, and set the current state-of-the-art in generation tasks. The milestone work, denoising diffusion probabilistic model (DDPMs) [91 ###reference_b91###], was published in 2020 and have sparked an exponentially increasing interest in the generative AI community afterwards. DDPMs are defined as a parameterized Markov chain, which generate data from Gaussian noise within finite transitions during inference. Its training encompasses two interconnected processes. (i) Forward pass maps a data distribution to a simpler prior distribution via a diffusion process:\nwhere are fixed coefficients that determine the noise schedule. (ii) Reverse pass leverages a trained neural network (typicall a UNet) to gradually reverse the effects of the forward process by training it to estimate the noise which has been added to . Hence, the training objective can be derived as:\nwhere denotes an additional conditioning input to . Further, latent diffusion models (LDMs) extend DMs by training them in the low-dimensional latent space of an autoencoding model (e.g., VQGAN [92 ###reference_b92###]):\nThis leads to many popular text-to-image DMs (T2I-DMs), i.e., Stable Diffusion (SD) [43 ###reference_b43###]. Current T2I-DMs are able to generate high-fidelity images with rich texture, diverse content and intricate structures while having compositional and editable semantics. This phenomenon potentially suggests that T2I-DMs can implicitly learn both high-level and low-level visual concepts from massive image-text pairs. Moreover, recent research has highlighted the clear correlations between attention maps and text prompts in T2I-DMs [93 ###reference_b93###, 94 ###reference_b94###]. These properties extend the capability of T2I-DMs from generation to perception tasks [95 ###reference_b95###, 96 ###reference_b96###].\nSelf-Distillation with No Labels (DINO&DINOv2). 
DINO [56 ###reference_b56###] interprets self-supervised learning of ViTs as a special case of self-distillation, wherein learning relies on model\u2019s own predictions rather than external labels.\nDespite being a relatively small-sized model, DINO demonstrates a profound understanding of the visual world, characterized by its highly structured feature space. Notably, DINO shows two emerging properties: its features are excellent k-NN classifiers, and contain explicit information pertaining to image segmentation. DINOv2 [57 ###reference_b57###] pushes the limits of visual features by scaling DINO in model and data sizes, along with an improved training recipe. The resultant model yields general-purpose features that close the performance gap with supervised alternatives across various benchmarks, while also showing notable properties, such as understanding of object parts and scene geometry. Strictly, speaking, DINO is not a \u2018large\u2019 model in terms of the parameter scale, but it is included due to the emerged nice properties for segmentation, and its role as the successor of DINOv2 .\nSegment Anything (SAM). SAM [49 ###reference_b49###] has sparked a revolution in the field of image segmentation, and profoundly influences the development of large, general-purposed models in computer vision. Unlike the aforementioned vision FMs, SAM is built specifically for image segmentation, which is trained on a corpus of 1 billion masks from 11 million images using a promptable segmentation task. It achieves powerful zero-shot task generality to handle a wide range of image segmentation tasks, and allows for enhanced interactivity in segmentation through the use of \u201cprompts\u201d in forms of points, masks, boxes, and even language. Beyond this, SAM has shown promising capabilities in a multitude of tasks, including medical imaging [97 ###reference_b97###], image editing [98 ###reference_b98###], video segmentation [99 ###reference_b99###]. Despite its capabilities, one downside of SAM is the computational expense associated with its heavy image encoder. However, SAM2 [100 ###reference_b100###] addresses this by instead using an MAE pre-trained Hiera as the image encoder, yielding real-time speed and improved segmentation accuracy."
58
+ },
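The CLIP paragraph above describes a symmetric image-text contrastive objective over matched pairs. The following is a minimal sketch of that loss, assuming the image and text embeddings are already L2-normalized and that matching pairs share the same row index; it is not the reference CLIP implementation, where the temperature is a learned parameter rather than a constant.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor, text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric image-text InfoNCE loss over N matching pairs.

    image_emb, text_emb: (N, D) tensors, assumed L2-normalized;
    row i of both tensors comes from the same image-text pair.
    """
    logits = image_emb @ text_emb.t() / temperature           # (N, N) similarity logits
    targets = torch.arange(image_emb.size(0), device=image_emb.device)
    loss_i2t = F.cross_entropy(logits, targets)               # image-to-text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)           # text-to-image direction
    return 0.5 * (loss_i2t + loss_t2i)
```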
59
+ {
60
+ "section_id": "3",
61
+ "parent_section_id": null,
62
+ "section_name": "Segmentation Knowledge Emerges from FMs",
63
+ "text": "Given the emergency capabilities of LLMs, a natural question arises: Do segmentation properties emerge from FMs? The answer is positive, even for FMs not explicitly designed for segmentation, such as CLIP, DINO and Diffusion Models. In this section, we elaborate on the techniques to extract segmentation knowledge from these FMs, which are effectively unlocking a new frontier in image segmentation, i.e., acquiring segmentation without any training. Fig. 3 ###reference_### illustrates how to approach this and shows some examples.\n###figure_3###"
64
+ },
65
+ {
66
+ "section_id": "3.1",
67
+ "parent_section_id": "3",
68
+ "section_name": "Segmentation Emerges from CLIP",
69
+ "text": "Many studies [52 ###reference_b52###, 53 ###reference_b53###, 101 ###reference_b101###] acknowledge that the standard CLIP is able to discern the appearance of objects, but is limited in understanding their locations. The main reason is that CLIP learns holistic visual representations that are invariant to spatial positions, whereas segmentation requires spatial-covariant features \u2013 local representations should vary w.r.t. their spatial positions in an image. To better explain this, we revisit self-attention in Transformers:\nwhere , , are query, key, and value embeddings. is the input sequence with tokens, each being a -dimensional vector. denotes a projection matrix whose parameters are learned in pre-training. CLIP applies attention pooling to the last self-attention layer:\nwhere , and is globally average-pooled feature of . Eq. 7 ###reference_### encourages similar representations for different locations, leading to spatial-invariant features.\nDespite this, MaskCLIP [52 ###reference_b52###] finds that it is feasible to extract segmentation knowledge from CLIP with minimal modifications to the attention pooling module. Specifically, it simply sets the attention matrix to an identity matrix. In this way, each local visual token receives information only from its corresponding position so that visual features (i.e., ) are well localized. Such a straightforward modification results in an 11% increase of CLIP\u2019s mIoU on COCO-Stuff. Furthermore, SCLIP [53 ###reference_b53###] proposes to compute pairwise token correlations to allow each local token to attend to positions sharing similar information, i.e., the attention matrix is computed as: . CLIPSurgery [102 ###reference_b102###] computes value-value attention matrix: and incorporates the attention into each Transformer block rather than the last one. NACLIP [103 ###reference_b103###] computes key-key attention matrix: and further weights the attention map with a Gaussian kernel to encourage more consistent attention across adjacent patches. GEM [104 ###reference_b104###] presents a generalized way to calculate the attention matrix as: ."
70
+ },
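The training-free modifications surveyed above (MaskCLIP's identity attention, and the key-key or value-value variants of SCLIP, NACLIP and CLIPSurgery) amount to swapping the attention matrix of the last layer. The function below is a simplified sketch under assumed tensor shapes, with the mode names invented for the example; the value and output projections of the real CLIP encoder are omitted.

```python
import torch

def localized_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                        mode: str = "identity", tau: float = 1.0) -> torch.Tensor:
    """Recompute the last self-attention layer with a modified attention matrix
    so that each visual token stays spatially localized.

    q, k, v: (N, D) query/key/value embeddings of the N visual tokens.
    mode='identity': each token keeps only its own value (MaskCLIP-style, A = I).
    mode='kk':       tokens attend to tokens with similar keys (A = softmax(K K^T / tau)).
    """
    if mode == "identity":
        attn = torch.eye(q.size(0), device=q.device, dtype=v.dtype)
    elif mode == "kk":
        attn = torch.softmax(k @ k.t() / tau, dim=-1)
    else:
        raise ValueError(f"unknown mode: {mode}")
    return attn @ v  # localized token features, later matched against text embeddings per pixel
```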
71
+ {
72
+ "section_id": "3.2",
73
+ "parent_section_id": "3",
74
+ "section_name": "Segmentation Emerges from DMs",
75
+ "text": "A family of methods [55 ###reference_b55###, 105 ###reference_b105###, 106 ###reference_b106###, 107 ###reference_b107###, 108 ###reference_b108###, 109 ###reference_b109###, 110 ###reference_b110###, 111 ###reference_b111###, 112 ###reference_b112###, 113 ###reference_b113###] shows that pre-trained generative models, especially DMs, manifest remarkable segmentation capabilities. A major insight is that segmentation emerges from cross-attention maps in DMs. Formally, the cross-attention at one layer is computed as:\nHere and indicate linear layers of the U-Net that denoise in the latent space. and represent the length of text tokens and feature dimensionality in the layer, respectively. is the spatial size of the feature. denotes the cross-attention map of a single head. As seen, captures dense correlations between pixels and words, from which we are able to extract the mask associated with the class token . In practice, most methods consolidate cross-attention matrices across blocks, timestamps, and attention heads into a single attention map [55 ###reference_b55###, 105 ###reference_b105###, 106 ###reference_b106###, 107 ###reference_b107###, 110 ###reference_b110###] to obtain higher-quality attention maps. Nevertheless, cross-attention maps often lack clear object\nboundaries and may exhibit internal holes. Thus, they are typically completed by incorporating self-attention maps [55 ###reference_b55###, 106 ###reference_b106###] to yield final segmentation mask as where is a self-attention matrix."
76
+ },
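A minimal sketch of the recipe described above: take the cross-attention column of one text token as a coarse relevance map and complete it with the pixel-to-pixel self-attention. The tensor names, and the assumption that attention maps have already been averaged over heads, layers and timesteps, are illustrative choices rather than the code of any particular method.

```python
import torch

def mask_from_attention(cross_attn: torch.Tensor, self_attn: torch.Tensor,
                        token_idx: int, threshold: float = 0.5) -> torch.Tensor:
    """Derive a binary mask for one text token from diffusion attention maps.

    cross_attn: (HW, L) pixel-to-word attention, assumed consolidated over
                heads, layers and timesteps.
    self_attn:  (HW, HW) pixel-to-pixel self-attention used to complete the
                coarse cross-attention map.
    """
    coarse = cross_attn[:, token_idx]                                  # (HW,) relevance of each pixel
    refined = self_attn @ coarse                                       # propagate relevance along self-attention
    refined = (refined - refined.min()) / (refined.max() - refined.min() + 1e-8)
    return (refined > threshold).float()                               # flattened mask, reshape to (H, W) as needed
```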
77
+ {
78
+ "section_id": "3.3",
79
+ "parent_section_id": "3",
80
+ "section_name": "Segmentation Emerges from DINO",
81
+ "text": "DINO [56 ###reference_b56###] and DINOv2 [57 ###reference_b57###] demonstrate a surprising phenomenon that segmentation knowledge emerges in self-supervised visual transformers, but not appear explicitly in either supervised ViTs or CNNs. Caron et al. show in DINO [56 ###reference_b56###] that sensible object segmentation can be obtained from the self-attention of class token [CLS] in the last attention layer. More formally, given an input sequence of () patches, the affinity vector can be computed as the pairwise similarities between the class token [CLS] and patch tokens [I] in an attention head of the last layer:\nwhere and denote query and key features of corresponding tokens, respectively. The final attention map are averaged of over all attention heads, and can directly binarized to yield segmentation masks.\nBeyond this, some other works [114 ###reference_b114###, 115 ###reference_b115###, 116 ###reference_b116###, 117 ###reference_b117###] localize objects based on similarities between patch tokens:\nHere each element in measures the similarity between a pair of tokens. The key features are typically chosen in the computation since they show better localization properties than others (i.e., query or value features) [118 ###reference_b118###]. Based on the derived affinity matrix , LOST [114 ###reference_b114###] directly mines potential objects based on an inverse selection strategy; DeepSpectral [115 ###reference_b115###] and COMUS [117 ###reference_b117###] group pixels by partitioning the affinity matrix based on spectral theory; MaskDistill [116 ###reference_b116###] selects discriminative tokens based on , and diffuses information of discriminative tokens based on to estimate initial segmentation results.\n###figure_4### \u00a74 ###reference_###\u00a74.1 ###reference_###4.1.1 ###reference_.SSS1###4.1.2 ###reference_.SSS2###4.1.3 ###reference_.SSS3###4.1.4 ###reference_.SSS4###4.1.5 ###reference_.SSS5###\u00a74.2 ###reference_###4.2.1 ###reference_.SSS1###4.2.2 ###reference_.SSS2###4.2.3 ###reference_.SSS3###4.2.4 ###reference_.SSS4###\u00a74.3 ###reference_###4.3.1 ###reference_.SSS1###4.3.2 ###reference_.SSS2###4.3.3 ###reference_.SSS3###4.3.4 ###reference_.SSS4###"
82
+ },
83
+ {
84
+ "section_id": "4",
85
+ "parent_section_id": null,
86
+ "section_name": "Foundation Model based GIS",
87
+ "text": "This section presents a comprehensive review of recent advances in FM-based GIS, including semantic (\u00a74.1 ###reference_###), instance (\u00a74.2 ###reference_###) and panoptic segmentation (\u00a74.3 ###reference_###), as illustrated in Fig. 4 ###reference_###. Our discussions are approached from a technical perspective to elucidate the fundamental concepts and highlight the roles of FMs in GIS."
88
+ },
89
+ {
90
+ "section_id": "4.1",
91
+ "parent_section_id": "4",
92
+ "section_name": "Semantic Segmentation",
93
+ "text": ""
94
+ },
95
+ {
96
+ "section_id": "4.1.1",
97
+ "parent_section_id": "4.1",
98
+ "section_name": "4.1.1 CLIP-based Solution",
99
+ "text": "How to transfer pre-trained knowledge in CLIP to segmentation? This question has driven a wide spectrum of studies to solve image segmentation based on CLIP. However, the task is challenging due to the inherent granularity gap between the image-level training task in CLIP and pixel-level prediction task in image segmentation. Popular solutions are:\nTraining free Semantic Segmentation. As discussed in \u00a73.1 ###reference_###, it is feasible to derive segmentation masks from CLIP, with a slight modification of the self-attention module. On this basis, many approaches [52 ###reference_b52###, 53 ###reference_b53###, 102 ###reference_b102###, 103 ###reference_b103###, 104 ###reference_b104###] achieve semantic segmentation by leveraging CLIP text encoder as the classifier to determine the category of each mask. The whole process involves no extra training or fine-tuning.\nCLIP Finetuning. Following the popular pre-training-then-fine-tuning paradigm, a large number of methods fine-tunes CLIP using segmentation data. They can be categorized as either full fine-tuning or parameter-efficient tuning approaches. Full fine-tuning methods entail tuning the entire visual or textual encoders of CLIP. DenseCLIP [119 ###reference_b119###], for instance, pioneers this approach by refining CLIP\u2019s visual encoder through solving a pixel-text matching task. PPL [120 ###reference_b120###] augments DenseCLIP with a probabilistic framework to learn more accurate textual descriptions based on visual cues. Though showing promising results, these methods tend to break the visual-language association within CLIP and lead to severe losses of the open-vocabulary capacity. To alleviate this, CATSeg [121 ###reference_b121###] introduces a cost aggregation-based framework to maintain the zero-shot capability of CLIP even after full fine-tuning. OTSeg [122 ###reference_b122###] tackles it by leveraging the ensemble of multiple text prompts, and introduce a multi-prompts sinkhorn attention to improve multimodal alignment. However, these methods typically necessitate a substantial volume of densely annotated training images. In contrast, ZegCLIP [123 ###reference_b123###], LDVC [124 ###reference_b124###], and ZegOT [125 ###reference_b125###] employ parameter-efficient prompt tuning techniques to transfer CLIP. To prevent overfitting to seen categories, they all learn image-specific textual embeddings to achieve more accurate pixel-text alignment. SemiVL [126 ###reference_b126###] adopts partial tuning strategies to only tune parameters of self-attention layers. SAN [127 ###reference_b127###] adapts CLIP image encoder to segmentation via a lightweight adapter, and decouples the mask proposal and classification stage by predicting attention biases applied to deeper layers of CLIP for recognition.\nCLIP as Zero-Shot Classifier. Apart from model fine-tuning, many efforts directly utilize the pre-trained CLIP as classifiers, and are able to preserve CLIP\u2019s zero-shot transferability. The methods can be categorized into two major types: mask classification and pixel classification.\nMask classification methods [128 ###reference_b128###, 129 ###reference_b129###, 130 ###reference_b130###, 131 ###reference_b131###, 132 ###reference_b132###, 133 ###reference_b133###, 134 ###reference_b134###, 135 ###reference_b135###, 136 ###reference_b136###] in general follow a two-stage paradigm, wherein class-agnostic mask proposals are firstly extracted and then the pre-trained CLIP is used for classifying the proposals. 
Pioneering studies [128 ###reference_b128###, 129 ###reference_b129###] require a standalone, CLIP-unaware model for proposal generation, while recent approaches [130 ###reference_b130###, 131 ###reference_b131###, 133 ###reference_b133###, 132 ###reference_b132###, 134 ###reference_b134###] tend to integrate mask generation and classification within a unified framework. All these methods maintain CLIP frozen during training, but the vanilla CLIP is insensitive to different mask proposals, constraining classification performance. OVSeg [135 ###reference_b135###] and MAFT [136 ###reference_b136###] tackle this issue by tuning CLIP during training to make it more mask-aware.\nPixel classification methods [137 ###reference_b137###, 138 ###reference_b138###, 139 ###reference_b139###, 101 ###reference_b101###, 140 ###reference_b140###, 141 ###reference_b141###] employ CLIP to recognize pixels. LSeg [137 ###reference_b137###] achieves this by learning an independent image encoder to align with the original textual encoder in CLIP. Fusioner [138 ###reference_b138###] introduces a cross-modality fusion module to capture the interactions between visual and textual features from the frozen CLIP, and decodes the fused features into segmentation masks. PACL [139 ###reference_b139###] defines a new compatibility function for contrastive loss to align patch tokens of the vision encoder and the [CLS] token of the text encoder. Patch-level alignment can benefit zero-shot transfer to semantic segmentation. CLIPpy [101 ###reference_b101###] endows CLIP with perceptual grouping with a series of modifications on the aggregation method and pre-training strategies. Due to the absence of fine-grained supervisions, such CLIP-based segmentors cannot delineate the fine shape of targets. SAZS [142 ###reference_b142###] alleviates this by developing a boundary-aware constraint.\nSemantic Segmentation Emerges from Text Supervision. Inspired by CLIP, a stream of research attempts to learn transferable semantic segmentation models purely from text supervision. GroupViT [143 ###reference_b143###] and SegCLIP [144 ###reference_b144###] augment vanilla ViTs with grouping modules to progressively group image pixels into segments. To address their granularity inconsistency issue, SGP [145 ###reference_b145###] further mines non-learnable prototypical knowledge [112 ###reference_b112###] as explicit supervision for group tokens to improve clustering results. Unlike these works require customized image encoders, [146 ###reference_b146###] avoids modifying CLIP\u2019s architecture, but improves the optimization by sparsely contrasting on the image-text features with the maximum responses. TagAlign [147 ###reference_b147###] also focuses on the optimization part, and introduces fine-grained attributes as supervision signals to enable dense image-text alignment.\nKnowledge Distillation (KD). KD [148 ###reference_b148###] is a simple but efficient approach to transfer the capability of a foundation model, which has achieved many successes in NLP and CV. In the field of semantic segmentation, ZeroSeg [149 ###reference_b149###] and CLIP-ZSS [150 ###reference_b150###] distill the semantic knowledge from CLIP\u2019s visual encoder to a segmentation model. Moreover, many methods are based on self-distillation to teach themselves by aligning localized dense feature to visual feature of corresponding image patch [151 ###reference_b151###], or learning global semantics based on local information [152 ###reference_b152###]. 
Moreover, CLIP-DINOiser [153 ###reference_b153###] treats DINO as a teacher to guide CLIP to learn DINO-like features that are friendly to segmentation."
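The two-stage "mask proposal + CLIP zero-shot classification" recipe shared by several of the mask-classification methods above can be sketched as follows. The image path, the class vocabulary, the prompt template, and the single stand-in proposal are all placeholders; a real system would plug in an actual proposal generator.

```python
# Hedged sketch: class-agnostic masks come from any proposal generator; the frozen
# CLIP text encoder then scores each masked region against class prompts.
import numpy as np
import torch
import clip                      # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

classes = ["cat", "dog", "sofa"]                              # illustrative vocabulary
text = clip.tokenize([f"a photo of a {c}" for c in classes]).to(device)

image = np.array(Image.open("scene.jpg").convert("RGB"))      # hypothetical input image
proposals = [np.ones(image.shape[:2], dtype=bool)]            # stand-in for real mask proposals

with torch.no_grad():
    text_feat = model.encode_text(text)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    for mask in proposals:
        region = (image * mask[..., None]).astype(np.uint8)   # blank out the background
        inp = preprocess(Image.fromarray(region)).unsqueeze(0).to(device)
        img_feat = model.encode_image(inp)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        probs = (100.0 * img_feat @ text_feat.T).softmax(dim=-1)
        print("predicted class:", classes[int(probs.argmax())])
```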
100
+ },
101
+ {
102
+ "section_id": "4.1.2",
103
+ "parent_section_id": "4.1",
104
+ "section_name": "4.1.2 DM-based Solution",
105
+ "text": "Beyond the discriminative model CLIP, there has been a growing interest in extending the horizon of generative models like DMs from generation tasks to semantic segmentation.\nFrom the technical perspective, current research can be grouped into the following categories.\nTraining free Semantic Segmentation. Based on the techniques in \u00a73.2 ###reference_###, [55 ###reference_b55###, 106 ###reference_b106###, 107 ###reference_b107###] generate a mask for each candidate class, and assign a category to each pixel by identifying the class with the highest confidence value. FreeSeg-Diff [108 ###reference_b108###] follows a two-stage paradigm, that is, cluster attention maps into class-agnostic masks and then classify each mask by CLIP. These methods are limited by text prompt tokens, requiring an association between each semantic class and a prompt word, which is not always valid. To address this, OVAM [109 ###reference_b109###] introduces an extra attribution prompt to enable the generation of semantic segmentation masks described by an open vocabulary, irrespective of the words in the text prompts used for image generation. Furthermore, OVDiff [111 ###reference_b111###] takes a prototype learning perspective [112 ###reference_b112###, 113 ###reference_b113###] to build a set of categorical prototypes using T2I-DMs, which serve as nearest neighbor classifiers for segmentation. DiffSeg [154 ###reference_b154###] introduces an iterative merging process to merge self-attention maps in SD into valid segmentation masks. Unlike aforementioned methods, FreeDA [54 ###reference_b54###] employs SD to build a large pool of visual prototypes, and the most similar prototype is retrieved for each pixel to yield segmentation prediction.\nDiffusion Features for Semantic Segmentation. Beyond attention maps, the harness of DMs\u2019 latent representations for semantic segmentation is gaining popularity. Works like [63 ###reference_b63###, 155 ###reference_b155###] extract internal embeddings from text-free DMs for segmentation, but they are limited to close-vocabulary settings. In contrast, a majority of methods [156 ###reference_b156###, 157 ###reference_b157###, 158 ###reference_b158###] employs T2I-DMs (mostly SD) to mine semantic representations. LD-ZNet [158 ###reference_b158###] shows that 1) the latent space of LDMs is a better input representation compared to other forms like RGB images for semantic segmentation, and 2) the middle layers (i.e., {6,7,8,9,10}) of the denoising UNet contain more semantic information compared to either the early or later blocks of the encoder (consistent with the observation in [159 ###reference_b159###]). Beyond this, for T2I-DMs, text prompt plays a crucial role in feature extraction as it serves as guidance for semantic synthesis. VPD [156 ###reference_b156###] adopts a straightforward method to use class names in the dataset to form the text context of SD, in which class embedding is extracted from the text encoder of CLIP (with prompt \u201ca photo of [CLS]\u201d). TADP [157 ###reference_b157###] and Vermouth [160 ###reference_b160###] find that automatically generated captions serve as image-aligned text prompt that helps extract more semantically meaningful visual features. In contrast, MetaPrompt [161 ###reference_b161###] integrates SD with a set of learnable emebddings (called meta prompts) to activate task-relevant features within a recurrent feature refinement process. 
Furthermore, latent features show exceptional generalization performance to unseen domains with proper prompts.\nSemantic Segmentation as Denoising Diffusion. Away from these mainstream battlefields, some works [162 ###reference_b162###, 163 ###reference_b163###, 164 ###reference_b164###, 65 ###reference_b65###] reformulate semantic segmentation as a denoising diffusion process. They learn an iterative denoising process to predict the ground truth map from random noise conditioned on corresponding visual features derived from an image encoder. Based on this insight, SegRefiner [165 ###reference_b165###] considers a discrete diffusion formulation to refine coarse masks derived from existing segmentation models. Moreover, Peekaboo [166 ###reference_b166###] is an interesting approach that treats segmentation as a foreground alpha mask optimization problem which is optimized via SD at inference time. It alpha-blends an input image with random background to generate a composite image, and then takes an inference time optimization method to iteratively update the alpha mask to converge to optimal segmentation with respect to image and text prompts.\nT2I-DMs as Semantic Segmentation Data Synthesizer. Collecting and annotating images with pixel-wise labels is time-consuming and laborious, and thus always a challenge to semantic segmentation. With recent advances in AIGC, many studies [106 ###reference_b106###, 167 ###reference_b167###, 168 ###reference_b168###, 169 ###reference_b169###] explore the potential of T2I-DMs to build large-scale segmentation dataset (including synthetic images and associated mask annotations), which serve as a more cost-effective data source to train any existing semantic segmentation models. The idea has also been adopted in specialist domains like medical image segmentation [170 ###reference_b170###]. Rather than directly generating synthetic masks, some works [171 ###reference_b171###, 172 ###reference_b172###, 173 ###reference_b173###] employ T2I-DMs for data augmentation based on a few labeled images."
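The "diffusion features for segmentation" idea can be sketched with forward hooks on a Stable Diffusion U-Net, in the spirit of VPD-style pipelines: intermediate block outputs are collected during one denoising call and handed to a task decoder. The model identifier, the choice of up-blocks, and the timestep are assumptions; exact module names depend on the diffusers version.

```python
# Hedged sketch: harvest multi-scale SD U-Net features with forward hooks.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5",
                                               torch_dtype=torch.float32)
feats = {}
def hook(name):
    def fn(module, inputs, output):
        feats[name] = output if torch.is_tensor(output) else output[0]
    return fn

for i, blk in enumerate(pipe.unet.up_blocks):          # mid/up blocks carry most semantics
    blk.register_forward_hook(hook(f"up_{i}"))

tok = pipe.tokenizer(["a photo of a dog"], padding="max_length",
                     max_length=pipe.tokenizer.model_max_length, return_tensors="pt")
text_emb = pipe.text_encoder(tok.input_ids)[0]

latents = torch.randn(1, 4, 64, 64)                    # or VAE-encoded image latents
t = torch.tensor([50])
with torch.no_grad():
    pipe.unet(latents, t, encoder_hidden_states=text_emb)

print({k: tuple(v.shape) for k, v in feats.items()})   # multi-scale features for a decoder
```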
106
+ },
107
+ {
108
+ "section_id": "4.1.3",
109
+ "parent_section_id": "4.1",
110
+ "section_name": "4.1.3 DINO-based Solution",
111
+ "text": "Unsupervised Segmentation via Direct Grouping. Given the emergence of segmentation properties in DINO, many methods directly group DINO features into distinct regions via, e.g., k-means [118 ###reference_b118###] or graph partition [114 ###reference_b114###, 174 ###reference_b174###, 175 ###reference_b175###] based on spatially local affinities in Eq. 10 ###reference_###. While being training-free, they are limited in discovering salient objects, and fail to generate masks for multiple semantic regions \u2013 which is critical for semantic segmentation.\nUnsupervised Semantic Segmentation via Self-training.\nFollow-up works investigate self-training approaches to address aforementioned limitation. They tend to train segmentation models on automatically discovered pseudo labels from DINO features. Pseudo labels are in general obtained in a bottom-up manner, but the strategies differ across methods. DeepSpectral [115 ###reference_b115###] performs spectral clustering over dense DINO features to over-cluster each image into segments, and afterwards cluster DINO representations of such segments across images to determine pseudo segmentation labels. Those segments represent object parts that could be combined with over-clustering and community detection to enhance the quality of pseudo masks [176 ###reference_b176###]. COMUS [117 ###reference_b117###] combines unsupervised salient masks with DINO feature clustering to yield initial pseudo masks, which are exploited to train a semantic segmentation network to self-bootstrap the system on images with multiple objects. Notably, STEGO [177 ###reference_b177###] finds that DINO\u2019s features have correlation patterns that are largely consistent with true semantic labels, and accordingly presents a novel contrastive loss to distill unsupervised DINO features into compact semantic clusters. Furthermore, DepthG [178 ###reference_b178###] incorporates spatial information in the form of depth maps into the STEGO training process; HP [179 ###reference_b179###] proposes more effective hidden positive sample to enhance contrastive learning; EAGLE [180 ###reference_b180###] extracts object-level semantic and structural cues from DINO features to guide the model learning object-aware representations."
112
+ },
113
+ {
114
+ "section_id": "4.1.4",
115
+ "parent_section_id": "4.1",
116
+ "section_name": "4.1.4 SAM-based Solution",
117
+ "text": "SAM for Weakly Supervised Semantic Segmentation. While SAM is semantic unawareness, it attains generalized and remarkable segmentation capability, which are widely leveraged to improve segmentation quality in the weakly supervised situations. [181 ###reference_b181###] uses SAM for post-processing of segmentation masks, while [182 ###reference_b182###] leverages SAM for zero-shot inference. S2C [183 ###reference_b183###] incorporates SAM at both feature and logit levels. It performs prototype contrastive learning based on SAM\u2019s segments, and extracts salient points from CAMs for prompting SAM."
118
+ },
119
+ {
120
+ "section_id": "4.1.5",
121
+ "parent_section_id": "4.1",
122
+ "section_name": "4.1.5 Composition of FMs for Semantic Segmentation",
123
+ "text": "FMs are endowed with distinct capabilities stemming from their pre-training objectives. For example, CLIP excels in semantic understanding, while SAM and DINO specialize in spatial understanding. As such, many approaches amalgamate an assembly of these FMs into a cohesive system that absorbs their expertise. Some of them are built under zero guidance [184 ###reference_b184###, 108 ###reference_b108###, 185 ###reference_b185###]. They leverage DINO or SD to identify class-agnostic segments, map them to CLIP\u2019s latent space, and translate the embedding of each segment into a word (i.e., class name) via image captioning models like BLIP. Another example is SAM-CLIP [186 ###reference_b186###] that combines SAM and CLIP into a single model via multi-task distillation. Recently, RIM [187 ###reference_b187###] builds a training-free framework under the collaboration of three VFMs. Concretely, it first constructs category-specific reference features based on SD and SAM, and then matches them with region features derived from SAM and DINO via relation-aware ranking."
124
+ },
125
+ {
126
+ "section_id": "4.2",
127
+ "parent_section_id": "4",
128
+ "section_name": "Instance Segmentation",
129
+ "text": ""
130
+ },
131
+ {
132
+ "section_id": "4.2.1",
133
+ "parent_section_id": "4.2",
134
+ "section_name": "4.2.1 CLIP-based Solution",
135
+ "text": "CLIP as Zero-shot Instance Classifier.\nCLIP plays an important role in achieving open-vocabulary instance segmentation. [188 ###reference_b188###, 189 ###reference_b189###, 190 ###reference_b190###] leverage the frozen CLIP text encoder as a classifier of instance mask proposals. OPSNet [191 ###reference_b191###] utilizes CLIP visual and textual embeddings to enrich instance features, which are subsequently classified by the CLIP text encoder. [192 ###reference_b192###] introduces a generative model to synthesize unseen features from CLIP text embeddings, thereby bridging semantic-visual spaces and address the challenge of lack of unseen training data. [193 ###reference_b193###] presents a dynamic classifier to project CLIP text embedding to image-specific visual prototypes, effectively mitigating bias towards seen categories as well as multi-modal domain gap."
136
+ },
137
+ {
138
+ "section_id": "4.2.2",
139
+ "parent_section_id": "4.2",
140
+ "section_name": "4.2.2 DM-based Solution",
141
+ "text": "T2I-DMs as Instance Segmentation Data Synthesizer. DMs play a crucial role in instance segmentation by facilitating the generation of large-scale training datasets with accurate labels. MosaicFusion [169 ###reference_b169###] introduces a training-free pipeline that simultaneously generates\nsynthetic images via T2I-DMs and corresponding masks through aggregation over cross-attention maps. [194 ###reference_b194###] adopts a cut-and-paste approach for data augmentation, where both foreground objects and background images are generated using DMs. DatasetDM [168 ###reference_b168###] presents a semi-supervised approach that first learns a perception decoder to annotate images based on a small set of labeled data, and then generates images and annotations for various dense prediction tasks."
142
+ },
143
+ {
144
+ "section_id": "4.2.3",
145
+ "parent_section_id": "4.2",
146
+ "section_name": "4.2.3 DINO-based Solution",
147
+ "text": "Unsupervised Instance Segmentation. Some methods [195 ###reference_b195###, 196 ###reference_b196###, 116 ###reference_b116###, 197 ###reference_b197###] attempt to amplify the innate localization abilitiy of DINO to train instance-level segmentation models without any human labels. They typically work in a two-stage discover-and-learn process: discover multiple object masks from DINO features by, e.g., recursively applying normalized cuts [195 ###reference_b195###], and then leverage them as pseudo labels to train instance segmentation models."
148
+ },
149
+ {
150
+ "section_id": "4.2.4",
151
+ "parent_section_id": "4.2",
152
+ "section_name": "4.2.4 Composition of FMs for Instance Segmentation",
153
+ "text": "X-Paste [198 ###reference_b198###] revisits the traditional data boosting strategy, i.e., Copy-Paste, at scale to acquire large-scale object instances with high-quality masks for unlimited categories. It makes full use of FMs to prepare images, i.e., using SD to generate images and using CLIP to filter Web-retrieved images. Instances in the images are extracted with off-the-shelf segmentors, which are composed with background images to create training samples. DiverGen [199 ###reference_b199###] improves X-Paste by focusing more on enhancing category diversity. It leverages SAM to more accurately extract instance masks. Orthogonal to these studies, Zip [200 ###reference_b200###] combines CLIP and SAM to achieve training-free instance segmentation. It observes that clustering on features of CLIP\u2019s middle layer is keenly attuned to object boundaries. Accordingly, it first clusters CLIP features to extract segments, then filters them according to boundary and semantic cues, and finally prompts SAM to produce instance masks.\nMoreover, it is easy to directly turn SAM into an instance segmentation model by feeding bounding boxes of instances as prompts [201 ###reference_b201###, 202 ###reference_b202###], which can be obtained from object detectors, e.g., Faster R-CNN [30 ###reference_b30###], Grounding DINO [203 ###reference_b203###]."
154
+ },
155
+ {
156
+ "section_id": "4.3",
157
+ "parent_section_id": "4",
158
+ "section_name": "Panoptic Segmentation",
159
+ "text": ""
160
+ },
161
+ {
162
+ "section_id": "4.3.1",
163
+ "parent_section_id": "4.3",
164
+ "section_name": "4.3.1 CLIP-based Solution",
165
+ "text": "CLIP as Zero-Shot Mask Classifier.\nMost recent panoptic segmentation approaches [188 ###reference_b188###, 189 ###reference_b189###, 204 ###reference_b204###, 191 ###reference_b191###, 192 ###reference_b192###, 205 ###reference_b205###, 206 ###reference_b206###, 190 ###reference_b190###] follow the query-based mask classification framework introduced in MaskFormer [22 ###reference_b22###] / Mask2Former [23 ###reference_b23###]. They generate class-agnostic mask proposals first and then utilize CLIP to classify the proposals, thereby empowering MaskFormer and Mask2Former open-vocabulary segmentation capabilities. MaskCLIP [188 ###reference_b188###] introduces a set of mask class tokens to extract mask representations more efficiently. MasQCLIP [189 ###reference_b189###] augments MaskCLIP by applying additional projections to mask class tokens to obtain optimal attention weights. OPSNet [191 ###reference_b191###] learns more generalizable mask representations based on CLIP visual encoder that are subsequently used to enhance query embeddings. Unpair-Seg [205 ###reference_b205###] presents a weakly supervised framework that allows the model to benefit from cheaper image-text pairs. It learns a feature adapter to align mask representations with text embeddings, which are extracted from CLIP\u2019s visual and language encoders respectively. Despite the advances, these methods still require training a separate model for each task to achieve the best performance. Freeseg [206 ###reference_b206###] and DaTaSeg [207 ###reference_b207###] design all-in-one models with the same architecture and inference parameters to establish remarkable performance in open-vocabulary semantic, instance, and panoptic segmentation problems. OMG-Seg [208 ###reference_b208###] introduces a unified query representation to unify different task outputs, and is able to handle 10 segmentation tasks across different datasets."
166
+ },
167
+ {
168
+ "section_id": "4.3.2",
169
+ "parent_section_id": "4.3",
170
+ "section_name": "4.3.2 DM-based Solution",
171
+ "text": "Diffusion Features for Panoptic Segmentation. ODISE [209 ###reference_b209###] explores internal representations within T2I DMs to accomplish open-vocabulary panoptic segmentation. It follows the architectural design of Mask2Former but leverages visual features derived from pre-trained diffusion UNet to predict binary mask proposals and associated mask representations. These proposals are finally recognized using CLIP as the zero-shot classifier.\nPanoptic Segmentation as Denoising Diffusion. Pix2Seq- [210 ###reference_b210###] formulates panoptic segmentation as a discrete data generation problem conditioned on pixels, using a Bit Diffusion generative model [211 ###reference_b211###]. DFormer [67 ###reference_b67###] introduces a diffusion-based mask classification scheme that learns to generate mask features and attention masks from noisy mask inputs. Further, LDMSeg [212 ###reference_b212###] solves generative segmentation based on SD by first compressing segmentation labels to compact latent codes and then denoising the latents following the diffusion schedule."
172
+ },
173
+ {
174
+ "section_id": "4.3.3",
175
+ "parent_section_id": "4.3",
176
+ "section_name": "4.3.3 DINO-based Solution",
177
+ "text": "Unsupervised Panoptic Segmentation. Based on the successes of STEGO [177 ###reference_b177###] in semantic segmentation and CutLER [195 ###reference_b195###] in instance segmentation, U2Seg [213 ###reference_b213###] automatically identify \u201cthings\u201d and \u201cstuff\u201d within images to create pseudo labels that are subsequently used to train a panoptic segmentation model, such as Panoptic Cascade Mask R-CNN [214 ###reference_b214###]. Moreover, [215 ###reference_b215###] follows the bottom-up architecture of [216 ###reference_b216###] to separately predict semantic and boundary maps, which are later fused to yield a panoptic segmentation mask."
178
+ },
179
+ {
180
+ "section_id": "4.3.4",
181
+ "parent_section_id": "4.3",
182
+ "section_name": "4.3.4 SAM-based Solution",
183
+ "text": "Towards Semantic-Aware SAM. While SAM shows strong zero-shot performance, its outputs are semantic-agnostic. This drives many research efforts, e.g., Semantic-SAM [217 ###reference_b217###], SEEM [50 ###reference_b50###], to enhance the semantic-awareness of SAM. In addition to visual prompts in SAM for interactive segmentation, these models learn generic object queries to achieve generic segmentation in both semantic and instance levels. In addition, the models are generally trained on a combination of multiple datasets with semantic annotations, such as COCO [218 ###reference_b218###], ADE20K [219 ###reference_b219###], PASCAL VOC [220 ###reference_b220###].\n###figure_5### \u00a75 ###reference_###\u00a75.1 ###reference_###5.1.1 ###reference_.SSS1###\u00a75.2 ###reference_###5.2.1 ###reference_.SSS1###5.2.2 ###reference_.SSS2###5.2.3 ###reference_.SSS3###\u00a75.3 ###reference_###5.3.1 ###reference_.SSS1###5.3.2 ###reference_.SSS2###5.3.3 ###reference_.SSS3###5.3.4 ###reference_.SSS4###5.3.5 ###reference_.SSS5###5.3.6 ###reference_.SSS6###"
184
+ },
185
+ {
186
+ "section_id": "5",
187
+ "parent_section_id": null,
188
+ "section_name": "Foundation Model based PIS",
189
+ "text": "As shown in Fig. 5 ###reference_###, this section reviews FM-based PIS methods."
190
+ },
191
+ {
192
+ "section_id": "5.1",
193
+ "parent_section_id": "5",
194
+ "section_name": "Interactive Segmentation",
195
+ "text": ""
196
+ },
197
+ {
198
+ "section_id": "5.1.1",
199
+ "parent_section_id": "5.1",
200
+ "section_name": "5.1.1 SAM-based Solution",
201
+ "text": "As SAM is born as a universe interactive segmenting system, it naturally becomes the top selection for researchers to build advanced interactive segmentation frameworks.\nMulti-Granularity Interactive Segmentation. Most existing interactive segmentation methods determines a single segmentation mask based on users\u2019 input, which ignores spatial ambiguity. In contrast, SAM introduces a multi-granularity interactive segmentation pipeline, i.e., for each user interaction, desired segmentation region may be the concept of objects with different parts nearby. To improve the segmentation quality, HQ-SAM [201 ###reference_b201###] proposes a lightweight high-quality output token replace the original SAM\u2019s output token. After training on 44K\nhighly-accurate masks, HQ-SAM significantly boosts the mask prediction quality of SAM. Since SAM is class-agnostic, a line of works [221 ###reference_b221###, 222 ###reference_b222###] tunes SAM by aligning the query-segmented regions with corresponding textual representations from CLIP. OVSAM [222 ###reference_b222###] explores dual knowledge transfer between SAM and CLIP to enhance SAM\u2019s recognition capabilities. Semantic SAM [217 ###reference_b217###] designs a SAM-like framework that supports multi-granularity segmentation using the captioned SAM data. Although these multi-granularity interactive segmentation approaches alleviate spatial ambiguity, they result in excessive output\nredundancy and limited scalability. To solve this, GraCo [223 ###reference_b223###] explores granularity-controllable interactive segmentation, which allows precise control of prediction granularity to resolve ambiguity.\nSAM for Medical Image Interactive Segmentation. Interaction segmentation is crucial in the medical field [224 ###reference_b224###], such as for achieving highly precise segmentation of lesion regions, or reducing manual efforts in annotating medical data. Unlike the segmentation of natural images, medical image segmentation poses greater challenges due to many intrinsic issues such as structural complexity, low contrast, or inter-order variability. Recently, several studies [225 ###reference_b225###, 226 ###reference_b226###, 227 ###reference_b227###] explore the zero-shot interactive segmentation capabilities in medical imaging. They cover a diverse range of anatomical and pathological targets across different medical imaging modalities, including CT [228 ###reference_b228###], MRI [229 ###reference_b229###], pathological images [230 ###reference_b230###], endoscopic images [186 ###reference_b186###]. While these studies indicate that SAM performs comparably to state-of-the-art methods in identifying well-defined objects in certain modalities, it struggles or fails completely in more challenging situations, such as when targets have weak boundaries, low contrast, small size, and irregular shapes. This suggests that directly applying SAM without fine-tuning or re-training to previously unseen and challenging medical image segmentation may result in suboptimal performance.\nTo enhance SAM\u2019s performance on medical images, some approaches propose to fine-tune SAM on medical images. MedSAM [97 ###reference_b97###] curates a large scale dataset containing over one million medical image-mask pairs of 11 modalities, which are used for directly fine-tuning SAM. In contrast, other methods explore parameter-efficient fine-tuning strategies. SAMed [231 ###reference_b231###] applies LoRA modules to the pre-trained SAM image encoder. 
SAMFE [232 ###reference_b232###] finds that applying LoRA to the mask decoder yields superior performance in cases with few exemplars. SAM-Med2D [226 ###reference_b226###] enhances the image encoder by integrating learnable adapter layers. MedSA [233 ###reference_b233###] adapts SAM to volumetric medical images by introducing Space-Depth Transpose where a bifurcated attention mechanism is utilized by capturing spatial correlations in one branch and depth correlations in another. 3DSAM-Adapter [234 ###reference_b234###] introduces a holistic 2D to 3D adaptation method via carefully designed modification of the entire SAM architecture."
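A SAMed-style parameter-efficient tuning setup can be sketched as follows: the qkv projections of SAM's ViT blocks are wrapped with low-rank adapters and only the adapter weights are left trainable. The module path (image_encoder.blocks[i].attn.qkv), the rank, and the checkpoint path are assumptions based on the official segment_anything code, and this is a sketch rather than SAMed's actual implementation.

```python
# Hedged sketch: LoRA adapters on SAM's image-encoder attention projections.
import torch
import torch.nn as nn
from segment_anything import sam_model_registry

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 4):
        super().__init__()
        self.base = base
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.B.weight)                  # start as an identity update
    def forward(self, x):
        return self.base(x) + self.B(self.A(x))

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
for p in sam.parameters():
    p.requires_grad = False                            # freeze the whole model first

for blk in sam.image_encoder.blocks:                   # attribute names follow the official repo
    blk.attn.qkv = LoRALinear(blk.attn.qkv, r=4)

trainable = [p for p in sam.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable LoRA parameters")
```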
202
+ },
203
+ {
204
+ "section_id": "5.2",
205
+ "parent_section_id": "5",
206
+ "section_name": "Referring Segmentation",
207
+ "text": ""
208
+ },
209
+ {
210
+ "section_id": "5.2.1",
211
+ "parent_section_id": "5.2",
212
+ "section_name": "5.2.1 CLIP-based Solution",
213
+ "text": "Referring segmentation aims to segment a referent via a natural linguistic expression. The multi-modal knowledge in CLIP is broadly explored to tackle this multi-modal task.\nTraining-free Referring Segmentation. ZS-RS [235 ###reference_b235###] represents a training-free referring image segmentation method that leverages cross-modal knowledge in CLIP. It begins by generating instance-level masks using an off-the-shelf mask generator, then extracts local-global features of masks and texts from CLIP, and finally identifies the desired mask based on cross-modal feature similarity. TAS [236 ###reference_b236###] employs a similar pipeline as ZS-RS, but computes more fine-grained region-text matching scores to pick the correct mask.\nMulti-modal Knowledge Transfer. Many efforts have been devoted to transfer multi-modal knowledge within CLIP from image-level to pixel-level. A common idea [237 ###reference_b237###, 238 ###reference_b238###, 239 ###reference_b239###, 240 ###reference_b240###, 241 ###reference_b241###, 242 ###reference_b242###, 243 ###reference_b243###, 244 ###reference_b244###, 245 ###reference_b245###] is to introduce a task decoder to fuse CLIP\u2019s image and textual features, and train it with text-to-pixel contrastive learning [237 ###reference_b237###]. In addition to task decoder, ETRIS [238 ###reference_b238###] and RISCLIP [239 ###reference_b239###] integrate a Bridger module to encourage visual-language interactions at each encoder stage. EAVL [241 ###reference_b241###] learns a set of convolution kernels based on input image and language, and do convolutions over the output of task decoder to predict segmentation masks. UniRES [242 ###reference_b242###] explores multi-granularity referring segmentation to unify object-level and part-level grounding tasks. TP-SIS [244 ###reference_b244###] transfers multi-modal knowledge within CLIP for referring surgical instrument segmentation.\nWeakly Supervised Referring Segmentation. Moving towards real-world conditions, some work studies weakly supervised referring segmentation to alleviate the cost on pixel labeling. TSEG [246 ###reference_b246###] computes patch-text similarities with CLIP and guides the classification objective during training with a multi-label patch assignment mechanism. TRIS [247 ###reference_b247###] proposes a two-stage pipeline that extracts coarse pixel-level maps from image-text attention maps, which are subsequently used to train a mask decoder."
214
+ },
215
+ {
216
+ "section_id": "5.2.2",
217
+ "parent_section_id": "5.2",
218
+ "section_name": "5.2.2 DM-based Solution",
219
+ "text": "Training-free Referring Segmentation. Some works [248 ###reference_b248###, 166 ###reference_b166###] find that SD is an implicit referring segmentor with the help of generative process. Peekaboo [166 ###reference_b166###] formulates segmentation as a foreground alpha mask optimization problem with SD, where a fine-grained segmentation map should yield a high-fidelity image generation process. In this way, minimizing the discrepancy between the mask-involved noise and the target noise shall give better textual-aligned pixel representations. Ref-diff [248 ###reference_b248###] first generates a set of object proposals from generative models, and determines the desired mask based on proposal-text similarities.\nDiffusion Features for Referring Segmentation. With the conditioned textual guidance, the modal-intertwined attention maps (c.f. \u00a73.2 ###reference_###) could intuitively serve as an initial visual dense representation, which could be used to yield the final segmentation mask. VPD [156 ###reference_b156###] introduces a task-specific decoder to process encoded features fused from cross-attention maps and multi-level feature maps in U-Net. Meanwhile, LD-ZNet [158 ###reference_b158###] injects attention features into a mask decoder for generating better textual-aligned pixel-level masks. Apart from the attention-based utilization, [249 ###reference_b249###, 250 ###reference_b250###] directly feed side outputs from each intermediate layer of the diffusion U-Net as well as the textual embedding, to a mask decoder to yield final predictions."
220
+ },
221
+ {
222
+ "section_id": "5.2.3",
223
+ "parent_section_id": "5.2",
224
+ "section_name": "5.2.3 LLMs/MLLMs-based Solution",
225
+ "text": "The success of LLMs/MLLMs has showcased incredible reasoning ability and can answer complex questions, thereby bringing new possibilities to achieve new pixel reasoning and understanding capabilities. In particularly, LISA [59 ###reference_b59###] studies a new segmentation task, called reasoning segmentation. Different from traditional referring segmentation, the segmentors in this setting are developed to segment the object based on implicit query text involving complex reasoning. Notably, the query text is not limited to a straightforward reference (e.g., \u201cthe front-runner\u201d), but a a more complicated description involving complex reasoning\nor world knowledge (e.g., \u201cwho will win the race?\u201d). LISA employs LLaVA [251 ###reference_b251###] to output a text response based on the input image, text query, and a [seg] token. The embedding for the customized [seg] token is decoded into the segmentation mask via SAM decoder. Afterwards, LISA++ [252 ###reference_b252###] promotes LISA to differentiate individuals within the same category and enables more natural conversation in multi-turn dialogue. Based on these works, many efforts have been devoted to promote the reasoning capability and segmentation accuracy. LLM-Seg [253 ###reference_b253###] proposes using SAM to generate a group of mask proposals that selects the best-suited answer as the final segmentation prediction. Next-Chat [254 ###reference_b254###] adds a [trigger] token that depicts the coordinate of the object box as a supplementary input for MLLM to help generate better masks. Similarly, GSVA [255 ###reference_b255###] introduces a rejection token [rej] to relieve the empty-target case where the object referred to in the instructions does not exist in the image, leading to the false-positive prediction. Except for the functional token incorporation, [256 ###reference_b256###, 257 ###reference_b257###] propose using diverse textual descriptions, such as object attribute and part, to enhance the object-text connection for accurate reasoning results. Regarding reasoning costing, PixelLLM [60 ###reference_b60###] introduces a lightweight decoder to reduce the computational cost in the reasoning process. Osprey [258 ###reference_b258###] extends MLLMs by incorporating fine-grained mask regions into language instruction, and delivers remarkable pixel-wise visual understanding capabilities."
226
+ },
227
+ {
228
+ "section_id": "5.2.4",
229
+ "parent_section_id": "5.2",
230
+ "section_name": "5.2.4 Composition of FMs for Referring Segmentation",
231
+ "text": "To enhance the textual representation for pixel-level understanding, some methods use LLMs as the text encoder for obtaining improved textual embedding for modal fusion. Particularly, BERT [259 ###reference_b259###], due to its simplicity and practicality, is nearly the top choice among works [260 ###reference_b260###, 261 ###reference_b261###, 262 ###reference_b262###, 246 ###reference_b246###, 263 ###reference_b263###, 264 ###reference_b264###, 265 ###reference_b265###, 266 ###reference_b266###, 267 ###reference_b267###, 268 ###reference_b268###, 269 ###reference_b269###, 270 ###reference_b270###]. Most of them design a fusion module to bridge the features between the visual encoder and BERT. In addition, some works [254 ###reference_b254###, 271 ###reference_b271###, 272 ###reference_b272###] treat LLM as a multi-modal unified handler, and use Vicuna [273 ###reference_b273###] to map both image and text into a unified feature space, thereafter generating the segmentation output. With the powerful dialogue capabilities of the GPT-series models [39 ###reference_b39###], some works [274 ###reference_b274###, 275 ###reference_b275###, 276 ###reference_b276###] employ ChatGPT to rewrite descriptions with richer semantics, and encourages finer-grained image-text interaction in referring segmentation model training.\nApart from textual enhancement using LLMs, SAM [49 ###reference_b49###] is widely chosen to provide rich segmentation prior for referring segmentation. [277 ###reference_b277###] presents a prompt-driven framework to bridge CLIP and SAM in an end-to-end manner through prompting mechanisms. [278 ###reference_b278###] focuses on building referring segmentors based on a simple yet effective bi-encoder design, i.e., adopting SAM and a LLM to encode image and text patterns, respectively, and then fuse the multi-modal features for segmentation predictions. Such a combination of SAM and LLM, without bells and whistles, could be easily extended to the MLLM case. Therefore, [279 ###reference_b279###, 280 ###reference_b280###] propose to incorporate CLIP with SAM to improve the multi-modal fusion. Specifically, F-LMM [279 ###reference_b279###] proposes to use CLIP to encode the visual features, which are decoded by SAM to the predicted segmentation map. PPT [280 ###reference_b280###] first employs attention maps of CLIP to compute the peak region as the explicit point prompts, which are directly used to segment the query target."
232
+ },
233
+ {
234
+ "section_id": "5.3",
235
+ "parent_section_id": "5",
236
+ "section_name": "Few-Shot Segmentation",
237
+ "text": ""
238
+ },
239
+ {
240
+ "section_id": "5.3.1",
241
+ "parent_section_id": "5.3",
242
+ "section_name": "5.3.1 CLIP-based Solution",
243
+ "text": "CLIP Features for Few-Shot Segmentation. Adopting CLIP to extract effective visual correlation from the support images to help segmentation inference of the query image has formulated a prevailing pipeline to address FSS, which shall be categorized into two streams based on the usage of CLIP-oriented visual feature. The first class [281 ###reference_b281###, 282 ###reference_b282###, 283 ###reference_b283###, 284 ###reference_b284###, 285 ###reference_b285###, 286 ###reference_b286###] relies on modelling the feature relationship of support-query images to explicitly segment the query image. WinCLIP [281 ###reference_b281###] aggregates the multi-scale CLIP-based visual features of the reference and query images to obtain an enhanced support-query correlation score map for pixel-level prediction. [282 ###reference_b282###, 283 ###reference_b283###, 284 ###reference_b284###, 285 ###reference_b285###] further refine the score maps with the query- and support-based self-attention maps. [286 ###reference_b286###] introduces the foreground-background correlation from the support images by crafting proper textual prompts. Another line of works [287 ###reference_b287###, 243 ###reference_b243###, 288 ###reference_b288###] focuses on segmenting the query image regulated by the support-image-generated prototypes, where some metric functions, e.g., cosine similarity, shall be involved for the query-prototype distance calculation. RD-FSS [287 ###reference_b287###] proposes to leverage the class description from CLIP text encoder as the textual prototypes, which are then correlated with visual features to dense prediction in a cross-attention manner. Additionally, PartSeg [288 ###reference_b288###] aggregates both the visual and textual prototypes to help generate the improved query image pixel-level representation. Here the visual prototypes are obtained through correspondingly pooling the CLIP-based visual feature by the reference segmentation masks. To further enhance the prototypical representation, [243 ###reference_b243###] use CLIP to generate the visual prototypes from the masked support images, where only interested object is remained."
244
+ },
245
+ {
246
+ "section_id": "5.3.2",
247
+ "parent_section_id": "5.3",
248
+ "section_name": "5.3.2 DM-based Solution",
249
+ "text": "Diffusion Features for Few-Shot Segmentation.\nThe internal representations of DMs are useful for few-shot segmentation. Specifically, [289 ###reference_b289###] directly leverages the latent diffusion features at specific time step as the representations of the support image, which are decoded along with the original image via a mask decoder. On the contrary, DifFSS [290 ###reference_b290###] proposes to synthesize more support-style image-mask pairs using DMs. Building on the invariant mask, the generated support images shall include same mask-covered object yet with diverse background, enriching the support patterns for better query segmentation.\nFew-Shot Segmentation as Denoising Diffusion. Some studies [291 ###reference_b291###, 292 ###reference_b292###] tackle few-shot segmentation by solving a denoising diffusion process. They fine-tune SD to explicitly generate segmentation mask for query images, with the main difference being the condition applied during the fine-tuning. MaskDiff [291 ###reference_b291###] uses query image and support masked images as the condition, while SegICL [292 ###reference_b292###] merely employs the support/query mask as the condition."
250
+ },
251
+ {
252
+ "section_id": "5.3.3",
253
+ "parent_section_id": "5.3",
254
+ "section_name": "5.3.3 DINO-based Solution",
255
+ "text": "DINO Features for Few-Shot Segmentation. There are some efforts [293 ###reference_b293###, 294 ###reference_b294###, 295 ###reference_b295###, 296 ###reference_b296###] exploiting latent representations in DINO/DINOv2 to enhance query and support features. [293 ###reference_b293###] directly uses DINOv2 to encode query and support images, and shows that DINOv2 outperforms other FMs, like SAM and CLIP. Based on this, SPINO [294 ###reference_b294###] employs DINOv2 for few-shot panoptic segmentation. [295 ###reference_b295###, 296 ###reference_b296###] further mine out query-support correlations through the cross- and self-attention of token embeddings in DINO, leading to more support-aware segmentation."
256
+ },
257
+ {
258
+ "section_id": "5.3.4",
259
+ "parent_section_id": "5.3",
260
+ "section_name": "5.3.4 SAM-based Solution",
261
+ "text": "Prompt Generation for SAM. Given the provided support image sets, a line of works [297 ###reference_b297###, 298 ###reference_b298###, 299 ###reference_b299###, 300 ###reference_b300###, 301 ###reference_b301###] focuses on generating proper prompts for SAM to segment the desired target in the query image. Notably, a majority of them [297 ###reference_b297###, 298 ###reference_b298###, 299 ###reference_b299###] propose to generate a group of candidate points as prompts based on the support-query image-level correspondence/similarity, where the support mask, highlighting the query object\u2019s semantic, is then used to select the object-oriented prompts. VRP-SAM [300 ###reference_b300###] learns a set of visual reference prompts based on query-support correspondence, which are fed into a frozen SAM for segmentation. APSeg [301 ###reference_b301###] extends VRP-SAM by exploring multiple support embeddings to generate more meaningful prompts for SAM."
262
+ },
263
+ {
264
+ "section_id": "5.3.5",
265
+ "parent_section_id": "5.3",
266
+ "section_name": "5.3.5 LLM/MLLM-based Solution",
267
+ "text": "There are several trials [302 ###reference_b302###, 303 ###reference_b303###] in adopting LLM/MLLM to address FSS through instruction design. LLaFS [302 ###reference_b302###] maps the fused support-query pattern into the language space, and let a LLM to tell the coordinate description of the desired segmentation mask.\n[303 ###reference_b303###] uses GPT-4 as the task planner to divide FSS into a sequence of sub-tasks based on the support set, subsequently calls vision tools such as SAM and GPT4Vision to predict segmentation masks."
268
+ },
269
+ {
270
+ "section_id": "5.3.6",
271
+ "parent_section_id": "5.3",
272
+ "section_name": "5.3.6 In-Context Segmentation",
273
+ "text": "The rapid progress of LLMs leads to an emerging ability to learn in-context from just a few examples [38 ###reference_b38###, 45 ###reference_b45###]. Inspired by this, researchers are exploring a related concept in computer vision called in-context segmentation (ICS). From the perspective of segmenting a query image based on a few supports, ICS can be seen as a sub-task of FSS, but it functions directly on pre-trained models without any task-specific finetuning. Most ICL-emerged LLMs are generative models trained through masked language modeling or next token prediction strategies, leading to efforts in ICS that mimic these self-supervised methods. Pioneering work like VPInpainting [304 ###reference_b304###] approaches visual in-context learning as image inpainting. It defines visual prompt as a grid-like single image containing an input-output example(s) and a query, then trains an inpainting model (via MAE [305 ###reference_b305###]) to predict the missing parts of an image such that it is consistent with given example(s). With this basis, [306 ###reference_b306###, 307 ###reference_b307###, 308 ###reference_b308###] propose to retrieve optimal examples from large datasets as the prompt. Additionally, Painter [309 ###reference_b309###] and SegGPT [51 ###reference_b51###] are vision generalists built on in-context learning. They unify various vision tasks within the in-context learning framework by standardizing outputs of core vision tasks. Some other studies [310 ###reference_b310###, 311 ###reference_b311###] focus on creating large vision models by formatting images, like language tokenizer, to a group of sequence as visual sentences, and then perform LLM-like training via next token prediction. Notably, developing these visual autoregressive models requires vast amounts of diverse vision data from varied tasks, e.g., image segmentation, depth estimation. PromptDiffusion [312 ###reference_b312###] explores in-context learning for diffusion models by fine-tuning SD to generate the query mask conditioned on the support image-mask pair and the query image. Matcher [313 ###reference_b313###] utilizes DINOv2 to locate the target in query images by bidirectional matching, and leverages the coarse location information as the prompts of SAM for segmentation. Tyche [314 ###reference_b314###] introduces a probabilistic approach to ICS by explicitly modeling training and testing uncertainty, and shows significant potential in medical image segmentation."
274
+ },
275
+ {
276
+ "section_id": "6",
277
+ "parent_section_id": null,
278
+ "section_name": "Open Issue and Future Direction",
279
+ "text": "Based on the reviewed research, the field of image segmentation has made tremendous progress in the FM era. Nonetheless, given the high diversity and complexity of segmentation tasks, coupled with the rapid evolution of FMs, several critical directions warrant ongoing exploration.\nExplaining the Emergence of Segmentation Knowledge in FMs. Despite that different FMs vary significantly in architectures, data and training objectives, we observe a consistent emergence of segmentation knowledge from them, which drives the development of impressive training-free segmentation models. However, current methods do not fully explain how these FMs learn to understand pixels, especially how pixels interact with other modalities, like texts in CLIP and Text-to-Image Diffusion Models. This calls for novel explainable techniques to enhance our understanding of pixels in FMs. This is crucial to minimize the negative societal impacts in existing FMs, and will broaden more applications of FMs in diverse visual domains and tasks.\nIn-Context Segmentation. Motivated by the success of in-context learning in the language domain, there has been a growing interest in exploring its potential for vision tasks, such as image segmentation. However, the variability in output representations across vision tasks \u2013 such as the differing formats required for semantic, instance, and panoptic segmentation \u2013 renders ICS a particularly challenging problem. While some progress have been made, current results don\u2019t show as high performance as bespoke, especially in difficult tasks like panoptic segmentation. Additionally, the ability to perform segmentation at arbitrary levels of granularity through in-context learning remains an unexplored area. Last, the scale of models employed in ICS is considerably smaller compared to the NLP counterparts like GPT-3, which may be a key factor limiting the performance of ICS. To achieve a breakthrough akin to GPT-3 in the vision domain, it is essential to develop large vision models [310 ###reference_b310###]. This task poses significant difficulties and will require extensive collaboration within the vision community to address issues related to data, architecture, and training techniques.\nMitigating Object Hallucination in MLLMs-based Models. Although MLLMs have demonstrated significant success in pixel understanding (c.f. \u00a75.2.3 ###reference_.SSS3###), they are prone to the issue of object hallucination [315 ###reference_b315###] as LLMs. Here object hallucination refers that a model generates unintended descriptions or captions that contain objects which are inconsistent with or even absent from the target image. This issue greatly undermines the reliability of these models in real-world applications. Hence, we advocate for future research in MLLMs-based segmentation to rigorously assess object hallucinations for their models, and to incorporate this issue consideration in the development of segmentation models.\nPowerful and Scalable Data Engine. Segmentation data are catalysts for progress in image segmentation. Much of the current success in deep learning based image segmentation owes to datasets such as PASCAL VOC [220 ###reference_b220###], COCO [218 ###reference_b218###], Cityscapes [316 ###reference_b316###], and ADE20K [219 ###reference_b219###]. Nonetheless, scaling up image data is a long-standing challenge and is becoming increasingly critical in the FM era, which calls for a powerful and scalable segmentation data engine. 
Recently, SAM [49 ###reference_b49###] tackles this issue with a data engine that labels images via \u201cmodel-in-the-loop\u201d, yielding SA-1B with 11M images and 1B masks. Nevertheless, the engine is limited in realistic image labeling and lacks semantic awareness. A promising direction is to incorporate generative models into the system, which would create a more powerful data engine that can scale to arbitrary levels and is more favorable to data-scarcity scenarios like medical imaging [317 ###reference_b317###] and satellite imagery [318 ###reference_b318###].\nDiffusion Models as the New Data Source. Text-to-image diffusion models have been proved feasible to build segmentation datasets by generating pairs of synthetic images and corresponding segmentation masks. However, there exists many challenges. First, existing DMs like SD have difficulties in generating complex scenes, e.g., a crowded street with hundreds of objects, closely intertwined objects. To alleviate this, layout or box conditions, instead of solely text, should be provided to guide the generation. Second, the bias in LAION-5B on which SD was trained, might be transferred to the dataset. This issue can be alleviated by absorbing the advancements in addressing the bias problem in generative models. Third, the domain gap between synthetic and real datasets should be continuously studied. Fourth, current approaches are limited in generating data for the task of semantic segmentation and a limited number of semantic categories, how to generalize them to generate instance-level segmentation masks and scale up the semantic vocabulary are unsolved.\nEfficient Image Segmentation Model. While FM-driven segmentation models exhibit remarkable performance, the majority of methods introduce significant computational overheads, such as heavy image encoders for feature computation and costly fine-tuning processes. These challenges impede the broader applicability and affordability of the models in practical scenarios. Key techniques to be explored include knowledge distillation, model compression, and parameter-efficient tuning. Most existing studies focus on improving the deployment efficiency solely for SAM; yet, attention to other FMs is equally vital."
280
+ },
281
+ {
282
+ "section_id": "7",
283
+ "parent_section_id": null,
284
+ "section_name": "Conclusion",
285
+ "text": "In this survey, we provide the first comprehensive review to recent progress of image segmentation in the foundation model era. We introduce key concepts and examine the inherent segmentation knowledge in existing FMs such as CLIP, Diffusion Models, SAM and DINO/DINOv2. Moreover, we summarize more than 300 image segmentation models for tackling generic and promptable image segmentation tasks. Finally, we highlight existing research gaps that need to be filled and illuminate promising avenues for future research. We hope that this survey will act as a catalyst, sparking future curiosity and fostering a sustained passion for exploring the potential of FMs in image segmentation."
286
+ }
287
+ ],
288
+ "appendix": [],
289
+ "tables": {},
290
+ "image_paths": {
291
+ "1": {
292
+ "figure_path": "2408.12957v3_figure_1.png",
293
+ "caption": "Figure 1: Image segmentation tasks reviewed in this survey. Generic image segmentation: (a) semantic segmentation, (b) instance segmentation, (c) panoptic segmentation; Promptable image segmentation: (d) interactive segmentation, (e) referring segmentation, (f) few-shot segmentation.",
294
+ "url": "http://arxiv.org/html/2408.12957v3/x1.png"
295
+ },
296
+ "2": {
297
+ "figure_path": "2408.12957v3_figure_2.png",
298
+ "caption": "Figure 2: Overview of this survey.",
299
+ "url": "http://arxiv.org/html/2408.12957v3/x2.png"
300
+ },
301
+ "3": {
302
+ "figure_path": "2408.12957v3_figure_3.png",
303
+ "caption": "Figure 3: (a) Illustrations of how segmentation derived from FMs. Briefly speaking, Modifying CLIP\u2019s attention pooling to location-aware attentions can obtain segmentation features. Merging cross-attention maps and self-attention maps in DMs can produce precise semantic segments. DINO naturally contains segmentation properties in the last attention maps of the class token. (b) shows some visualization examples.",
304
+ "url": "http://arxiv.org/html/2408.12957v3/x3.png"
305
+ },
306
+ "4": {
307
+ "figure_path": "2408.12957v3_figure_4.png",
308
+ "caption": "Figure 4: Overview of Foundation Model based GIS (\u00a74).",
309
+ "url": "http://arxiv.org/html/2408.12957v3/extracted/6028618/figure/4-1-gis.png"
310
+ },
311
+ "5": {
312
+ "figure_path": "2408.12957v3_figure_5.png",
313
+ "caption": "Figure 5: Overview of Foundation Model based PIS (\u00a75).",
314
+ "url": "http://arxiv.org/html/2408.12957v3/extracted/6028618/figure/4-2-pis.png"
315
+ }
316
+ },
317
+ "validation": true,
318
+ "references": [],
319
+ "url": "http://arxiv.org/html/2408.12957v3"
320
+ }
20241127/2408.14776v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20241127/2408.17175v3.json ADDED
@@ -0,0 +1,622 @@
1
+ {
2
+ "title": "Codec Does Matter: Exploring the Semantic Shortcoming of Codec for Audio Language Model",
3
+ "abstract": "Recent advancements in audio generation have been significantly propelled by the capabilities of Large Language Models (LLMs). The existing research on audio LLM has primarily focused on enhancing the architecture and scale of audio language models, as well as leveraging larger datasets, and generally, acoustic codecs, such as EnCodec, are used for audio tokenization. However, these codecs were originally designed for audio compression, which may lead to suboptimal performance in the context of audio LLM. Our research aims to address the shortcomings of current audio LLM codecs, particularly their challenges in maintaining semantic integrity in generated audio. For instance, existing methods like VALL-E, which condition acoustic token generation on text transcriptions, often suffer from content inaccuracies and elevated word error rates (WER) due to semantic misinterpretations of acoustic tokens, resulting in word skipping and errors. To overcome these issues, we propose a straightforward yet effective approach called X-Codec. X-Codec incorporates semantic features from a pre-trained semantic encoder before the Residual Vector Quantization (RVQ) stage and introduces a semantic reconstruction loss after RVQ. By enhancing the semantic ability of the codec, X-Codec significantly reduces WER in speech synthesis tasks and extends these benefits to non-speech applications, including music and sound generation. Our experiments in text-to-speech, music continuation, and text-to-sound tasks demonstrate that integrating semantic information substantially improves the overall performance of language models in audio generation.\nOur code and demo are available\n111\n\nDemo: https://x-codec-audio.github.io\nCode: https://github.com/zhenye234/xcodec",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "In recent years, Large Language Models (LLMs) such as GPT [1 ###reference_b1###] have demonstrated remarkable capabilities in modeling complex, high-dimensional data across various domains, including text and image generation [2 ###reference_b2###, 3 ###reference_b3###]. Inspired by these successes, there has been significant interest [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###] in exploring the application of LLMs to audio generation.\nAudio codecs [8 ###reference_b8###] have emerged as a critical technique for audio LLMs, bridging the gap between continuous audio waveforms and token-based language models. By discretizing high-rate audio signals into a finite set of tokens, these codecs enable the application of LLM architectures to audio data, leveraging the successes of textual LLMs.\nHowever, prior research on audio codecs has primarily focused on achieving lower compression rates and higher reconstruction quality [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###]. Meanwhile, many efforts in audio generation have concentrated on enhancing model architecture, scaling, or leveraging larger datasets. For instance, AudioLM [5 ###reference_b5###] adopts a two-stage pipeline that models the acoustic token in an autoregressive way conditioned on the semantic token. VALL-E [6 ###reference_b6###], the first TTS framework to leverage large, diverse, and multi-speaker speech data, demonstrates strong in-context learning capabilities similar to GPT-3, treating TTS as a language modeling task on audio codecs. MusicGen [12 ###reference_b12###] generates music using a single-stage transformer LM alongside efficient token interleaving patterns. Similarly, UniAudio [7 ###reference_b7###] scaled up to 165K hours of audio and 1B parameters, utilizing LLM techniques to generate tokens for various types of audio, including speech, sounds, music, and singing, given different input conditions.\nWhile these works have shown success in developing audio language models, they all rely on the acoustic codecs such as Encodec [10 ###reference_b10###] or Soundstream [8 ###reference_b8###] for audio tokenization and de-tokenization. However, these acoustic codecs were originally designed for audio compression rather than for audio language models. This misalignment means the design may not be optimal for audio language modeling.\nTo design a better audio codec for Audio LLMs, we drew inspiration from the initial purpose of LLMs such as GPT, which were designed to process text. These models focus on understanding and generating natural language, which is inherently rich in semantics. Motivated by this, we assume that a better audio tokenizer should encapsulate rich semantic information to facilitate an easy understanding of audio content, thus reducing the language model\u2019s burden in interpreting tokens. However, most audio codecs focus on acoustic reconstruction which ignores the semantic information. 
As a result, LLM essentially tries to predict the local fluctuations of the audio signal, which is difficult, and methods like VALL-E, which condition acoustic token generation on text transcriptions, frequently result in content inaccuracies causing elevated word error rates (WER), stemming from the semantic misinterpretations of acoustic tokens, leading to word skipping and errors.\nTo address this issue, approaches like SpeechTokenizer [13 ###reference_b13###] have attempted to disentangle speech into separate tokens for content and timbre and perform distillation-based semantic and acoustic integration. However, this method may not integrate smoothly with all audio LLMs, especially those requiring uniform token treatment across different layers, such as utilizing flattened codec tokens [7 ###reference_b7###, 12 ###reference_b12###].\nIn this paper, We propose a straightforward yet effective method termed \u201cX-codec\u201d, which integrates both semantic and acoustic features into a unified tokenization framework. The X-Codec architecture employs a distinctive \u201cX-shaped\u201d structure, characterized by two inputs and two outputs, unifying semantic and acoustic information within a single Residual Vector Quantizer (RVQ) structure. This design enables simultaneous embedding learning of semantic richness and acoustic fidelity for every token, resulting in better performance for audio LLM.\nWe have conducted comprehensive evaluations of X-Codec across various applications, including text-to-speech, music continuation, and text-to-sound synthesis. The results consistently demonstrate the effectiveness of the proposed method. Furthermore, our comparative evaluation on VALL-E based TTS demonstrates that X-Codec outperforms existing disentanglement techniques, thereby highlighting its efficacy and versatility in advancing audio LLM technologies."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Works",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Audio Language Model",
21
+ "text": "The success of Large Language Models (LLMs) has sparked a significant trend in leveraging language foundation models for audio generation tasks [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 7 ###reference_b7###, 18 ###reference_b18###]. Audio, much like language, consists of variable-length sequences, making it well-suited for modeling with language foundation models. One pioneering method, AudioLM [5 ###reference_b5###], employs a multi-stage strategy to harness the predictive capabilities of foundation models for generating tokens unconditionally. This approach involves predicting semantic tokens from various conditions (e.g., phonemes, text descriptions, MIDI) in the initial stage, followed by transforming them into acoustic tokens through coarse-to-fine modeling, ultimately generating the waveform. Representative systems such as SPEAR-TTS [19 ###reference_b19###] for speech synthesis and MusicLM [4 ###reference_b4###] for music generation have also been proposed. However, the two-stage process can lead to complexity in training and suboptimal performance due to the separate development of semantic and acoustic tokens, leading to error accumulation.\nConversely, recent advancements have shown that methods employing a single-stage language model outperform two-stage approaches. For example, VALL-E [6 ###reference_b6###] utilizes an autoregressive (AR) model to predict the first token and a non-autoregressive (NAR) model to estimate the residual tokens, demonstrating superior performance compared to AudioLM. Similarly, MusicGen [12 ###reference_b12###] employs a single-stage transformer language model and incorporates a delay pattern strategy for efficient token interleaving, achieving better results than MusicLM. Other notable works include CLAM-TTS [20 ###reference_b20###], VoiceCraft [21 ###reference_b21###], and UniAudio [7 ###reference_b7###].\nDespite recent advancements, directly modeling the intricate low-level acoustic fluctuations with an LLM poses challenges. LLMs are primarily designed for processing natural language, which is inherently rich in semantics. In order to overcome this limitation, we propose X-Codec, a novel enhancement that aims to enrich semantic processing within acoustic codecs. By doing so, we aim to improve the overall performance of audio LLMs.\n###figure_1###"
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Audio Codec",
27
+ "text": "Recent advancements have seen a surge in deep learning methodologies employing vector quantization [22 ###reference_b22###] to reconstruct continuous signals into discrete representations for AR generation. Notably, audio codecs based on the VQ-GAN framework [23 ###reference_b23###] have gained prominence. For example, SoundStream [8 ###reference_b8###] introduces a versatile codec adaptable to various audio types, integrating Residual Vector Quantization (RVQ) and Generative Adversarial Network (GAN) to refine quantization and reconstruction. Similarly, Encodec [10 ###reference_b10###] enhances compression through a multi-scale discriminator and a loss-balancing strategy alongside a language model. HiFi-Codec [11 ###reference_b11###] employs Group-Residual Vector Quantization (GRVQ) to minimize the need for extensive codebooks while maintaining high reconstruction fidelity. DAC [9 ###reference_b9###] addresses codebook collapse, where some codes remain unused, by applying improved codebook learning to achieve higher compression rates.\nThese codecs primarily focus on acoustic reconstruction and higher compression rates, often overlooking their potential as tokenizers for audio LLMs. Some attempts have been made to develop more suitable tokenizers for audio LLMs. For example, SpeechTokenizer [13 ###reference_b13###] utilizes HuBERT to separate speech into distinct VQ components for content and timbre/acoustic details. This separation improves the modeling of content in the AR stage of VALL-E, while the NAR stage enriches the acoustic details. However, a distillation framework is exploited, this makes SpeechTokenizer may not be compatible with all LLM architectures, especially those that require uniform treatment of tokens, such as methods using flattened codec tokens [7 ###reference_b7###, 12 ###reference_b12###]. Another attempt is presented by SemantiCodec [24 ###reference_b24###], which employs a pre-trained AudioMAE [25 ###reference_b25###] to generate distinct semantic and acoustic tokens from mel-spectrograms. However, this method inherits the issues of SpeechTokenizer and introduces additional complexity in token modeling. Moreover, since the audioMAE is performed on 2D time-frequency mel-spectrograms, LLMs must effectively handle dual scales (time and frequency), which may require significant modifications to existing LLM structures.\nIn contrast, our proposed X-Codec provides a uniform and comprehensive enhancement of semantic information for all tokens, resulting in significant performance improvements for existing audio LLMs without requiring any structural modifications."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Methods",
33
+ "text": "In this section, we propose X-codec, a straightforward yet effective method to overcome the semantic shortcomings of the current acoustic codecs."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Acoustic Audio codec",
39
+ "text": "As illustrated in Figure 1 ###reference_###, our model builds upon the framework established by existing acoustic codecs such as Encodec [10 ###reference_b10###] and DAC[9 ###reference_b9###]. An acoustic audio codec is composed of three main components: an acoustic encoder, a quantizer, and an acoustic decoder. The input of the codec is the raw waveform , where represents the number of waveform samples. This waveform is fed into the acoustic encoder, which consists of several convolutional layers and employs temporal downscaling to extract frame-level latent acoustic features , where denotes the hidden size of the acoustic features and is the number of frames. These continuous features are then transformed into a series of discrete tokens using a Residual Vector Quantizer (RVQ) with quantizer layers. During training, a specific codebook for the quantizer is learned, enabling the conversion of discrete tokens back to continuous features . The acoustic decoder then reconstructs the waveform from using several convolutional layers and temporal upsampling. The training process is supervised using various losses, including mel loss, STFT loss, and GAN loss, to ensure high-quality acoustic reconstruction."
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Analysing Semantic Shortcoming",
45
+ "text": "In this section, we investigate the impact of acoustic codecs on the performance of audio LLMs, focusing specifically on VALL-E, a pioneering model that leverages language model principles for text-to-speech. Our analysis reveals that training VALL-E using Encodec results in high word error rates (WER) and frequent inaccuracies in content generation. For example, when the input text \u201che passed through Henley Saint Albans and came so near to London as Harrow on the Hill\u201d is synthesized, it is erroneously produced as \u201che passed through henley saint albeans and camsel knew to lunglan as herold the lor\u201d. This misinterpretation, which is beyond simply improving the audio quality, suggests a fundamental limitation in Encodec\u2019s ability to differentiate phonemes, possibly due to its inadequate semantic processing capabilities.\nTo substantiate the above hypothesis, we conducted Phonetic Discriminability ABX Tests to evaluate the phonetic discriminability of Encodec\u2019s representations. The details are provided in the experiment section. Our findings reveal that Encodec\u2019s representations exhibit poor phonetic discriminability, which confirms the presence of semantic inadequacies in the codec. Based on these results, we assert that these semantic shortcomings are a significant contributing factor to the observed inaccuracies of language model based audio generation.\nTo effectively address these semantic limitations, we introduce a novel approach that integrates more comprehensive semantic features into the codec\u2019s architecture. This enhancement is designed to enrich the codec\u2019s understanding of audio content, thereby alleviating the interpreting load on the language model. Detailed elaboration of this method is provided in the subsequent section."
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "Designing Auxiliary Semantic Module",
51
+ "text": "Our approach employs a straightforward method that enhances audio codecs by directly concatenating semantic and acoustic features. Initially, we extract the semantic feature vector from the audio waveform x. This extraction utilizes a self-supervised, pre-trained model such as HuBERT [26 ###reference_b26###] or wav2vec 2.0 [27 ###reference_b27###]. The extracted features are then processed through multiple convolutional layers within a semantic encoder to yield the refined semantic feature vector S. Concurrently, the acoustic branch produces the feature A. These outputs, S and A, are subsequently concatenated using a linear projection , formulated as:\nwhere the concatenated feature is designed to maximize information preservation from both semantic and acoustic sources. This combined feature is then subject to RVQ using an -layer quantizer, resulting in tokens that encapsulate a rich mixture of semantic and acoustic information.\nThe quantized feature is designed to meet the decoder\u2019s objectives through two projectors, and , which enable the decoders to reconstruct the original semantic feature and the audio waveform . We adhere to established acoustic reconstruction methods from previous works while introducing a Mean Squared Error (MSE) loss specifically for the reconstruction of semantic features. Furthermore, a constant weight is applied to the semantic loss to ensure that its scale is aligned with other losses, thus promoting a balanced training objective."
52
+ },
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Experiments",
57
+ "text": "Given that established audio codecs such as Encodec, Speechtokenizer, and DAC are trained on diverse datasets with varying configurations, we meticulously design experiments to rigorously evaluate the efficacy of our proposed solution, X-Codec. To ensure a fair and unbiased comparison, each experiment employs a baseline acoustic codec that is precisely aligned with our X-Codec in terms of training data, training steps, and other hyperparameters. The primary distinction between the baseline codec and X-Codec lies in the exclusion of the auxiliary semantic module in the baseline configuration. This controlled experimental design enables us to isolate and evaluate the specific contributions of our semantic enhancements to the overall performance of the audio LLMs."
58
+ },
59
+ {
60
+ "section_id": "4.1",
61
+ "parent_section_id": "4",
62
+ "section_name": "Text-to-Speech",
63
+ "text": "In this subsection, we critically evaluate the performance of various audio codecs in training the VALL-E model for zero-shot Text-to-Speech (TTS) tasks. Our investigation is guided by two primary objectives:\nTo determine whether the X-Codec can enhance the performance of audio LLMs in TTS applications.\nTo evaluate the comparative advantages of X-Codec over the disentanglement strategy employed by SpeechTokenizer, specifically within the context of the VALL-E model."
64
+ },
65
+ {
66
+ "section_id": "4.1.1",
67
+ "parent_section_id": "4.1",
68
+ "section_name": "4.1.1 Baselines",
69
+ "text": "For a comprehensive comparison, we employ several state-of-the-art neural audio codecs as baselines:\nEnCodec 222https://huggingface.co/facebook/encodec_24khz ###reference_khz###: The open-source EnCodec model [10 ###reference_b10###], trained on a diverse range of 24kHz audio data, can compress audio to bitrates between 1.5 and 24.0 kbps while maintaining high fidelity.\nDAC 333https://github.com/descriptinc/descript-audio-codec ###reference_dio-codec###: The open-source DAC model [9 ###reference_b9###] utilizes enhanced VQ techniques. For our experiments, we employ the official 16kHz version.\nSpeechTokenizer 444https://github.com/ZhangXInFD/SpeechTokenizer ###reference_zer###: This model [13 ###reference_b13###] is a unified speech tokenizer that leverages distinct VQ layers to separate speech into content and timbre components. We utilize their official checkpoints in our evaluations."
70
+ },
71
+ {
72
+ "section_id": "4.1.2",
73
+ "parent_section_id": "4.1",
74
+ "section_name": "4.1.2 Training Details of X-Codec",
75
+ "text": "Given our objective to assess the efficacy of X-Codec in leveraging semantic information, we meticulously align our experimental setup with that used for SpeechTokenizer. Both models are trained on the same dataset, LibriSpeech, and utilize the same pre-trained self-supervised representations from HuBERT-base-ls960 555https://huggingface.co/facebook/hubert-base-ls960 ###reference_e-ls960###. To ensure comparability, we also adopt the strategy of employing the average representation across various layers of HuBERT as our semantic training objective."
76
+ },
77
+ {
78
+ "section_id": "4.1.3",
79
+ "parent_section_id": "4.1",
80
+ "section_name": "4.1.3 Training Details of VALL-E",
81
+ "text": "For reproduction of the VALL-E, we utilize the resources specified in the provided repository 666https://github.com/lifeiteng/vall-e. The training data is the LibriTTS, retaining the default settings as specified in the repository, except for the learning rate during the AR stage, which is adjusted to 0.01 to enhance model stability. The training process span 100 epochs for the AR stage and 200 epochs for the non-autoregressive (NAR) stage, same for all audio codecs for a fair comparison."
82
+ },
83
+ {
84
+ "section_id": "4.1.4",
85
+ "parent_section_id": "4.1",
86
+ "section_name": "4.1.4 Evaluation Metrics",
87
+ "text": "To assess the performances of zero-shot TTS systems, we employ the following metrics:\nWER (Word Error Rate): We utilize an Automatic Speech Recognition (ASR) model to transcribe the generated audio [6 ###reference_b6###]. The discrepancies between these transcriptions and the original texts are quantified using WER, providing a critical measure of audio intelligibility.\nSim-O (Similarity Objective): This metric assesses the objective similarity between synthesized speech and the original reference speech. Sim-O uses feature embeddings extracted from a pre-trained speaker verification model to measure this similarity [26 ###reference_b26###, 20 ###reference_b20###]777https://github.com/microsoft/UniSpeech/tree/main/downstreams/speaker_verification ###reference_e/main/downstreams/speaker_verification###, reflecting the codec\u2019s ability to preserve speaker characteristics.\nUTMOS: We evaluate the audio quality using UTMOS, a Speech MOS (Mean Opinion Score) predictor [28 ###reference_b28###]888https://github.com/tarepan/SpeechMOS ###reference_### that automatically measures the naturalness of speech. This metric provides insights into the overall auditory quality of the synthesized speech."
88
+ },
89
+ {
90
+ "section_id": "4.1.5",
91
+ "parent_section_id": "4.1",
92
+ "section_name": "4.1.5 Zero-shot TTS Results",
93
+ "text": "We use librispeech-test-clean [29 ###reference_b29###]for zero-shot TTS evaluation following VALL-E-continual-setting [6 ###reference_b6###]. The results in Table 1 ###reference_### demonstrate the following key findings:\nWhen comparing both X-Codec and SpeechTokenizer against the baseline and other acoustic codecs like DAC and Encodec, we observe improvements in WER. This supports our hypothesis that integrating semantic information helps audio LLMs better understand content.\nComparing the baseline acoustic codec and SpeechTokenizer, SpeechTokenizer exhibited lower Sim-O scores. We attribute this reduction to its initial disentanglement phase, which exclusively focuses on content prediction. This specialization potentially hampers the NAR phase\u2019s ability to accurately reconstruct speaker timbre when conditioned solely on tokens derived from the primary content-focused stage, resulting in poor speaker similarity.\nX-Codec not only shows better WER but also higher Sim-O and UTMOS scores compared to SpeechTokenizer. This confirms the effectiveness of our approach, indicating that our codec handles the integration of semantic and acoustic information more proficiently."
94
+ },
95
+ {
96
+ "section_id": "4.1.6",
97
+ "parent_section_id": "4.1",
98
+ "section_name": "4.1.6 Analysing the Effect of Codec",
99
+ "text": "To further analyse the above results caused by different audio codecs, we evaluate phonetic discriminability using the ABX error rate [30 ###reference_b30###]. This metric assesses how well different codecs can distinguish between similar phonetic sounds within and across various contexts. We specifically examine the continuous representations for VQ as indicated by the results in the following table 2 ###reference_###. We compare the performance of various models in terms of within and across phonetic discriminability:\nKey insights include:\nBoth SpeechTokenizer and X-Codec significantly outperform pure acoustic codecs like Encodec and DAC in phonetic discriminability, which supports our claim that enhancing semantic understanding in codecs helps modelling content such as phonetic details.\nThe X-Codec demonstrates a notable trend of improved phonetic discriminability with an increase in the number of quantizations (nq). Specifically, as nq increases from 1 to 8, the ABX error rates consistently decrease, thereby highlighting effectiveness of the X-Codec\u2019s design in enhancing semantic integration across multiple quantization layers.\nIn contrast, the SpeechTokenizer, while exhibiting commendable performance at a lower quantization level (nq = 1), fails to show significant improvement as nq is increased. This suggests a design limitation; the codec\u2019s reliance on the initial quantization to carry semantic information restricts its ability to process a broader spectrum of semantic information. Notably, the performance of X-Codec at nq = 8 significantly exceeds that of SpeechTokenizer.\nThese results underline the effectiveness of our method in facilitating enhanced semantic integration, leading to better phonetic discriminability and audio LLMs. In addition, these results also show that our simple concatenate methods surpass disentangle methods such as speechtokenizer."
100
+ },
101
+ {
102
+ "section_id": "4.2",
103
+ "parent_section_id": "4",
104
+ "section_name": "Music and Sound Generation",
105
+ "text": "To the best of our knowledge, this is the first exploration into the potential benefits of incorporating semantic information into audio codecs for enhancing music and general sound generation through audio LLMs. Conventional methods for general audio representation learning, aiming at capturing the semantic discriminability of audios, are generally based on 2D mel-spectrogram, such as AudioMAE [25 ###reference_b25###] and Beats [31 ###reference_b31###]. These methods are in stark contrast to traditional codecs that process audio sequentially, frame-by-frame. This difference poses challenges for direct integration into existing audio generation frameworks.\nTo bridge this gap, we have developed a variant of HuBERT, specifically adapted for general audio, which we refer to as HuBERT-General-Audio. This HuBERT-General-Audio is trained on an expansive internal dataset of approximately 200,000 hours, with a similar distribution as AudioSet. Additionally, our proposed X-Codec is also trained using these data for 400,000 steps until convergence, incorporating the HuBERT-General-Audio model within its semantic module. For a fair comparison, we train a baseline acoustic codec under identical settings but excluding semantic information."
106
+ },
107
+ {
108
+ "section_id": "4.2.1",
109
+ "parent_section_id": "4.2",
110
+ "section_name": "4.2.1 Training Details of Self-Supervised General Audio Representation",
111
+ "text": "HuBERT-General-Audio is trained using 8 NVIDIA H800 GPUs on 2.6 million tokens across 325,000 iterations. For training stability, we adopt an inverse square root learning schedule, a modification from the polynomial decay schedule originally utilized in [26 ###reference_b26###]. The learning rate is set at 0.0003 with warmup steps of 32,000. Unlike the original HuBERT, which utilizes MFCCs as the training target unit designed specifically for speech, our model leverages the first VQ layer of Encodec as the training target for acoustic unit discovery in the general audio. This choice eliminates the need for the K-means discretization step, saving significant time and computational resources."
112
+ },
113
+ {
114
+ "section_id": "4.2.2",
115
+ "parent_section_id": "4.2",
116
+ "section_name": "4.2.2 Music Continuation",
117
+ "text": "Training Details:\nAcquiring high-quality text-music pair data is challenging; therefore, we gathered approximately 100,000 hours of music-only data, including about one million songs for the music continuation task. We deployed nanoGPT 999https://github.com/karpathy/nanoGPT to implement a GPT-2-medium (approximately 300M parameters) [32 ###reference_b32###] as our generative model. This model utilizes the first VQ from our codec to construct the training sequences, with additional experiments involving multiple VQs detailed in the appendix. We set the block size of sequence modelling to 4096, corresponding to roughly 82 seconds of audio, and adjust the vocabulary size from 50,257 to 1024, matching our codec\u2019s codebook size. Other training hyperparameters are consistent with previous GPT-2-medium configurations. We train 300,000 steps on 8 NVIDIA H800 GPUs. The batch size is set to 20, with a learning rate of 3e-4 and a warmup phase of 2000 steps.\nExperiments:\nFor music continuation, we randomly crop 600 samples with each 40 seconds in duration from the MUSDB18 dataset [33 ###reference_b33###]. The initial 10 seconds of each sample are used as prompts for the audio LLM, while the subsequent 30 seconds are generated by the model. These generated segments are then compared against the corresponding ground truth (GT) segments. To ensure that the assessment is independent of the codec\u2019s reconstruction fidelity, both the generated and GT audio are reconstructed using the first VQ layer of the codec, ensuring performance differences attributed solely to the generative models themselves.\nThe evaluation metrics of the generated music include: Frechet Distance (FD) computed using features from Pretrained Audio Neural Networks (PANNs) [34 ###reference_b34###], Frechet Audio Distance (FAD), and FD-MERT Layer 9 [35 ###reference_b35###]. The results, as summarized in Table 7 ###reference_###, reveal that the X-Codec significantly outperforms the baseline acoustic codec across all metrics. This superior performance indicates the X-Codec has a better understanding and enabling more effective reproduction of complex musical structures."
118
+ },
119
+ {
120
+ "section_id": "4.2.3",
121
+ "parent_section_id": "4.2",
122
+ "section_name": "4.2.3 Text-to-Sound",
123
+ "text": "Training Details:\nStill, GPT-2-medium (approximately 300M parameters) are adopted for conditional text-to-sound tasks, where the condition embedding is extracted from text captions using LAION-CLAP [48 ###reference_b48###] and linearly projected from 512 dimensions to 1024 dimensions for GPT input. The training data consists of approximately 400 hours of audio content sourced from the AudioCaps dataset [49 ###reference_b49###] and the AudioSet SL subset from the WavsCaps dataset [50 ###reference_b50###]. All audio samples are uniformly resampled to a 16kHz sampling rate. The first four tokens from the VQ layers are preprocessed and flattened to configure the GPT model\u2019s block size to 2000, corresponding to a processing rate of 50Hz. The training process spans 80,000 steps on four NVIDIA 4090 GPUs, with a batch size of 8 and a learning rate of 3e-4. A warmup phase of 2000 steps is employed to optimize the training process.\nEvaluation Metrics:\nfollowing [51 ###reference_b51###] and [52 ###reference_b52###], we calculate Frechet Distance (FD), Inception Score (IS), Frechet Audio Distance (FAD) for text-to-audio generation. In addition, CLAP score [51 ###reference_b51###] is used to evaluate the correspondence between the generated audio and the text prompt.\nExperiment Results:\nAs shown in Table 4 ###reference_###, the proposed X-Codec significantly outperforms the baseline acoustic codec across all metrics. These results demonstrate that semantic information integration significantly enhances the codec\u2019s capability, underscoring the value of semantic enrichment in audio generation tasks."
124
+ },
125
+ {
126
+ "section_id": "4.2.4",
127
+ "parent_section_id": "4.2",
128
+ "section_name": "4.2.4 Analysing the Effect of Codec",
129
+ "text": "We hypothesize that the enhanced audio generation capabilities of the audio LLMs are attributed to the improved semantic understanding facilitated by the X-Codec. To validate this hypothesis, we employ the ARCH benchmark [53 ###reference_b53###] to evaluate the audio semantic understanding, and the benchmark is a comprehensive framework specifically designed to evaluate automatic recognition learning methods across a diverse range of audio classification domains, including acoustic events, music, and speech. The results from this benchmark are shown in Table 5 ###reference_###.\nOur findings indicate that HuBERT-general-audio significantly outperforms traditional acoustic codecs such as DAC, Encodec, and the baseline acoustic codec across all metrics. This improvement highlights the enhanced semantic understanding of X-Codec for general audio, which appears to be lacking in conventional acoustic audio codecs.\nMoreover, X-Codec achieves performance that is comparable or even superior to HuBERT-general-audio, confirming the effectiveness of our approach to enhancing semantic processing within codecs. This equivalence or improvement indicates the capability of X-Codec to integrate semantic information robustly."
130
+ },
131
+ {
132
+ "section_id": "4.3",
133
+ "parent_section_id": "4",
134
+ "section_name": "Limitation",
135
+ "text": "While our method significantly enhances the performance of codecs for LLMs by integrating semantic information, it does come with certain trade-offs. According to the principle of \"no free lunch,\" improving one aspect of a system often involves compromises in others. In the case of our enhanced codecs, the primary limitation lies in their potential impact on the original functionality of codecs, which is compression for information transmission. The introduction of a semantic extraction layer adds additional computational overhead, potentially increasing the time required for processing. This can affect the efficiency of the codec when used in applications where rapid data compression and transmission are critical. Consequently, while our approach offers substantial benefits for semantic understanding and audio processing, it may not be as effective in contexts where high-speed data compression is paramount.\nFurthermore, the integration of semantic layers can slightly impair certain acoustic metrics such as Mel and STFT distance, which are crucial for maintaining the fidelity of compressed audio. However, it is essential to note that these trade-offs are counterbalanced by significant improvements in human auditory perception, as evidenced by the UTMOS scores."
136
+ },
137
+ {
138
+ "section_id": "5",
139
+ "parent_section_id": null,
140
+ "section_name": "Conclusion",
141
+ "text": "In this paper, we introduced X-codec, an advanced audio codec that integrates semantic information through self-supervised learning models to enhance performance in large language models, specifically in text-to-speech synthesis, music continuation, and general audio classification tasks. Our evaluations demonstrate that X-codec significantly improves semantic understanding and audio generation quality across a variety of domains."
142
+ },
143
+ {
144
+ "section_id": "6",
145
+ "parent_section_id": null,
146
+ "section_name": "Model Details",
147
+ "text": "Acoustic Encoder: Following the design principles in [9 ###reference_b9###], our encoder comprises four convolutional encoder blocks, each tailored to progressively downsample the input audio waveform at rates [2, 4, 5, 8]. This structured reduction ensures efficient encoding while preserving essential audio characteristics. The final output from the encoder has a hidden size of 256, ensuring detailed feature representation. The total model size for the encoder stands at 12.18MB.\nResidual Vector Quantizer (RVQ): A key component in our X-codec, the RVQ utilizes the techniques established by [8 ###reference_b8###]. We update the codebook entries using an exponential moving average and apply a straight-through estimator to facilitate the gradient computation during backpropagation. To bolster training effectiveness and adaptability, commitment loss is incorporated, and RVQ layers are randomly selected from options [1, 2, 3, 4, 8] during training. This variability allows the model to adapt more dynamically to different audio characteristics.\nAcoustic Decoder: The decoder is designed to mirror the encoder\u2019s architecture, featuring four layers that upsample the audio data at inverse rates [8, 5, 4, 2]. This symmetry between the encoder and decoder helps in effectively reconstructing the audio signal from its encoded state. The decoder\u2019s model size is approximately 19.27 MB.\nSemantic Encoder and Decoder: To further refine the semantic aspects of the audio signals, we incorporate two additional convolutional blocks within both the semantic encoder and decoder, each with a hidden size of 768. This setup enhances the model\u2019s ability to process and integrate semantic information effectively."
148
+ },
149
+ {
150
+ "section_id": "7",
151
+ "parent_section_id": null,
152
+ "section_name": "Music Continue with 4 VQ Flattern",
153
+ "text": "In this section, we expand our music continuation experiments to include conditions with four flattened vector quantizations (VQs) to further validate the effectiveness of our approach. While the experimental details remain consistent with previous setups, the use of four VQs necessitates a shorter segment length of approximately 20 seconds due to increased data density. We prompt the model with 5 seconds of audio and generate 5 seconds, aiming to assess the performance enhancement under these conditions.\nThe results underscore a noticeable improvement in performance with the X-codec, particularly highlighted by significant enhancements in Frechet Audio Distance (FAD) and perceptual quality metrics, suggesting that our X-codec not only maintains but also amplifies its efficacy in generating musically coherent and contextually rich outputs."
154
+ }
155
+ ],
156
+ "appendix": [],
157
+ "tables": {
158
+ "1": {
159
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Objective performance comparison on <span class=\"ltx_text ltx_font_italic\" id=\"S4.T1.9.1\">continuation</span> zero-shot speech\nsynthesis tasks using VALL-E trained on LibriTTS <span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.10.2\">with different audio codec</span>. Abbreviation: C (Common Voice), DNS (DNS Challenge 4 speech), AS (AudioSet), FSD (FSD50K), J (Jamendo), V (VCTK), M(MUSDB) </figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.6\" style=\"width:390.3pt;height:94.7pt;vertical-align:-0.6pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-140.9pt,34.0pt) scale(0.580760164914092,0.580760164914092) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.6.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.7.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T1.6.6.7.1.1\">Codec</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.6.6.7.1.2\">Training Data of</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S4.T1.6.6.7.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.7.1.3.1\">VALL-E AR stage</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S4.T1.6.6.7.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.6.6.7.1.4.1\">VALL-E AR+NAR stages</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.6\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T1.6.6.6.7\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.6.6.6.8\">Audio Codec</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.1.1.1.1\">WER \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.2.2.2.2\">SIM-O \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.3.3.3.3\">UTMOS \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.4.4.4.4\">WER \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.5.5.5.5\">SIM-O \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.6.6.6.6\">UTMOS \n</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.8.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T1.6.6.8.2.1\">GT</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.6.6.8.2.2\">-</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.6.6.8.2.3\">2.23</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.6.6.8.2.4\">0.67</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.6.6.8.2.5\">4.10</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.6.6.8.2.6\">2.23</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.6.6.8.2.7\">0.67</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.6.6.8.2.8\">4.10</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.9.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.6.6.9.1.1\">Encodec <cite class=\"ltx_cite 
ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib10\" title=\"\">10</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.9.1.2\">C+DNS+AS+FSD+J</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.9.1.3\">47.17</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.9.1.4\">0.09</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.9.1.5\">1.24</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.9.1.6\">6.37</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.9.1.7\">0.33</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.6.6.9.1.8\">3.02</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.10.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.6.6.10.2.1\">DAC <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib9\" title=\"\">9</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.10.2.2\">C+DNS+V+AS+J+M</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.10.2.3\">85.55</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.10.2.4\">0.03</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.10.2.5\">1.24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.10.2.6\">6.81</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.10.2.7\">0.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.10.2.8\">3.31</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.11.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.6.6.11.3.1\">Speechtokenizer <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib13\" title=\"\">13</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.11.3.2\">LibriSpeech</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.11.3.3\">7.53</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.11.3.4\">0.10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.11.3.5\">1.26</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.11.3.6\">5.24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.11.3.7\">0.36</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.11.3.8\">3.84</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.12.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T1.6.6.12.4.1\">Baseline Acoustic Codec</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.6.6.12.4.2\">LibriSpeech</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.6.6.12.4.3\">22.32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.6.6.12.4.4\">0.16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.6.6.12.4.5\">3.15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.6.6.12.4.6\">7.70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.6.6.12.4.7\">0.41</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.6.6.12.4.8\">3.89</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.13.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.6.6.13.5.1\">X-Codec-hubert</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.13.5.2\">LibriSpeech</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.13.5.3\">5.27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.13.5.4\">0.22</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T1.6.6.13.5.5\">3.85</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.13.5.6\">4.07</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.13.5.7\">0.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.13.5.8\">4.16</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.14.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T1.6.6.14.6.1\">X-Codec-wavlm-base-plus</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.6.14.6.2\">MLS English</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.6.14.6.3\">4.83</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.6.14.6.4\">0.24</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.6.14.6.5\">4.02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.6.14.6.6\">3.26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.6.14.6.7\">0.41</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.6.6.14.6.8\">4.22</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
160
+ "capture": "Table 1: Objective performance comparison on continuation zero-shot speech\nsynthesis tasks using VALL-E trained on LibriTTS with different audio codec. Abbreviation: C (Common Voice), DNS (DNS Challenge 4 speech), AS (AudioSet), FSD (FSD50K), J (Jamendo), V (VCTK), M(MUSDB) "
161
+ },
162
+ "2": {
163
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.3\" style=\"width:173.4pt;height:99.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-101.6pt,58.3pt) scale(0.460541594043471,0.460541594043471) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.3.3\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.3.3.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.3.4.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.2.2.1\">within </span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.3.3.1\">across </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.4.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.3.3.4.1.1\">hubert-ls-960 <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib26\" title=\"\">26</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.3.3.4.1.2\">-</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.4.1.3\">3.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.4.1.4\">4.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.5.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.3.3.5.2.1\">Encodec <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib10\" title=\"\">10</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.3.3.5.2.2\">1</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.5.2.3\">21.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.5.2.4\">28.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.6.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.3.3.6.3.1\">Encodec <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib10\" title=\"\">10</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.3.3.6.3.2\">8</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.6.3.3\">17.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.6.3.4\">27.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.7.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.3.3.7.4.1\">DAC <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib9\" title=\"\">9</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.3.3.7.4.2\">1</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.7.4.3\">26.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.7.4.4\">32.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.8.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.3.3.8.5.1\">DAC <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib9\" title=\"\">9</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" 
id=\"S4.T2.3.3.8.5.2\">12</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.8.5.3\">21.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.8.5.4\">33.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.9.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.3.3.9.6.1\">Speechtoknizer <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib13\" title=\"\">13</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.3.3.9.6.2\">1</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.9.6.3\">3.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.9.6.4\">4.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.10.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.3.3.10.7.1\">Speechtoknizer<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib13\" title=\"\">13</a>]</cite>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.3.3.10.7.2\">8</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.10.7.3\">3.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.10.7.4\">4.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.11.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.3.3.11.8.1\">Baseline Acoustic Codec</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.3.3.11.8.2\">1</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.11.8.3\">26.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.11.8.4\">31.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.12.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.3.3.12.9.1\">Baseline Acoustic Codec</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.3.3.12.9.2\">8</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.12.9.3\">20.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.12.9.4\">28.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.13.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.3.3.13.10.1\">X-Codec</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.3.3.13.10.2\">1</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.13.10.3\">3.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.13.10.4\">4.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.14.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T2.3.3.14.11.1\">X-Codec</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b\" id=\"S4.T2.3.3.14.11.2\">8</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.3.3.14.11.3\">3.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.3.3.14.11.4\">4.3</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Comparison of Phonetic Discriminability within and across ABX error rate for various models, with different values. Lower values indicate better performance.</figcaption>\n</figure>",
164
+ "capture": "Table 2: Comparison of Phonetic Discriminability within and across ABX error rate for various models, with different values. Lower values indicate better performance."
165
+ },
166
+ "3": {
167
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T3.3\" style=\"width:173.4pt;height:36.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-42.6pt,8.9pt) scale(0.670607194660211,0.670607194660211) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.3.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T3.3.3.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.3.3.4.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.1.1.1.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.1.1\">FD</span> \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.2.2.2.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.2.2.1\">FAD</span> \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.3.3.3.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.3.3.3.1\">FD-MERT-layer-9</span> \n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.4.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.3.3.4.1.1\">Acoustic codec</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.3.4.1.2\">16.17</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.3.4.1.3\">1.43</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.3.4.1.4\">2.88</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.5.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T3.3.3.5.2.1\">X-Codec</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.3.3.5.2.2\">12.66</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.3.3.5.2.3\">1.37</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.3.3.5.2.4\">2.62</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Comparison between baseline acoustic codec and our X-Codec on music continue.</figcaption>\n</figure>",
168
+ "capture": "Table 3: Comparison between baseline acoustic codec and our X-Codec on music continue."
169
+ },
170
+ "4": {
171
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T4.4\" style=\"width:151.8pt;height:35.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-40.0pt,9.3pt) scale(0.654962706199127,0.654962706199127) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.4.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.4.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T4.4.4.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.4.4.5.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.1.1.1.1\">FD</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.2.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.2.2.2.2.1\">IS</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.3.3.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.3.3.3.3.1\">FAD</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.4.4.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.4.4.4.1\">CLAP</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.4.4.5.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T4.4.4.5.1.1\">Acoustic codec</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.4.5.1.2\">59.03</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.4.5.1.3\">3.89</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.4.5.1.4\">6.19</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.4.5.1.5\">0.417</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.4.6.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T4.4.4.6.2.1\">X-Codec</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T4.4.4.6.2.2\">46.31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T4.4.4.6.2.3\">5.29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T4.4.4.6.2.4\">4.10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T4.4.4.6.2.5\">0.483</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Comparison between baseline acoustic codec and X-Codec on text-to-sound tasks.</figcaption>\n</figure>",
172
+ "capture": "Table 4: Comparison between baseline acoustic codec and X-Codec on text-to-sound tasks."
173
+ },
174
+ "5": {
175
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T5.1\" style=\"width:433.6pt;height:64.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-147.9pt,21.9pt) scale(0.594428793127601,0.594428793127601) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T5.1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T5.1.1.1.1.1\">Model/Datasets</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.1.1.2\">ESC-50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.1.1.3\">US8K</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.1.1.4\">FSD50K</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.1.1.5\">VIVAE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.1.1.6\">FMA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.1.1.7\">MTT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.1.1.8\">IRMAS</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.1.1.9\">MS-DB</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.1.1.10\">RAVDESS</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.1.1.11\">A-MNIST</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.1.1.12\">SLURP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.1.1.13\">EMOVO</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T5.1.1.2.2.1\">DAC <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib9\" title=\"\">9</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.2.2.2\">27.65</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.2.2.3\">45.16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.2.2.4\">7.08</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.2.2.5\">30.80</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.2.2.6\">38.50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.2.2.7\">27.69</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.2.2.8\">30.33</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.2.2.9\">51.79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.2.2.10\">37.50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.2.2.11\">73.59</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.2.2.12\">7.72</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.2.2.13\">23.46</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T5.1.1.3.3.1\">Encodec <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib10\" title=\"\">10</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.3.3.2\">30.60</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.3.3.3\">64.47</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.3.3.4\">8.63</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T5.1.1.3.3.5\">31.59</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.3.3.6\">39.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.3.3.7\">26.57</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.3.3.8\">28.16</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.3.3.9\">64.47</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.3.3.10\">31.60</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.3.3.11\">78.71</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.3.3.12\">8.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.3.3.13\">25.51</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T5.1.1.4.4.1\">Baseline Acoustic Codec</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.4.4.2\">40.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.4.4.3\">55.38</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.4.4.4\">11.10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.4.4.5\">37.70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.4.4.6\">48.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.4.4.7\">32.26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.4.4.8\">36.26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.4.4.9\">62.77</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.4.4.10\">47.22</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.4.4.11\">82.58</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.4.4.12\">9.28</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.1.1.4.4.13\">24.83</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T5.1.1.5.5.1\">Hubert-general-audio</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.5.5.2\">69.95</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.5.5.3\">74.87</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.5.5.4\">34.27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.5.5.5\">48.35</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.5.5.6\">64.50</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.5.5.7\">43.35</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.5.5.8\">49.80</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.5.5.9\">74.09</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.5.5.10\">69.10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.5.5.11\">99.43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.5.5.12\">21.27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.5.5.13\">35.03</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T5.1.1.6.6.1\">X-Codec</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.1.1.6.6.2\">69.85</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.1.1.6.6.3\">75.37</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.1.1.6.6.4\">34.05</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.1.1.6.6.5\">49.40</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.1.1.6.6.6\">64.63</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.1.1.6.6.7\">42.95</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_b\" id=\"S4.T5.1.1.6.6.8\">52.24</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.1.1.6.6.9\">75.63</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.1.1.6.6.10\">68.40</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.1.1.6.6.11\">99.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.1.1.6.6.12\">21.65</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.1.1.6.6.13\">35.20</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Performance of semantic representation on the ARCH benchmark. The table shows the performance of various models across different domains. ESC-50 <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib36\" title=\"\">36</a>]</cite>, US8K <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib37\" title=\"\">37</a>]</cite>, FSD50K <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib38\" title=\"\">38</a>]</cite>, and VIVAE <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib39\" title=\"\">39</a>]</cite> represent performance on Acoustic Events. FMA <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib40\" title=\"\">40</a>]</cite>, MTT <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib41\" title=\"\">41</a>]</cite>, IRMAS <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib42\" title=\"\">42</a>]</cite>, and MS-DB <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib43\" title=\"\">43</a>]</cite> indicate performance in the Music domain. RAVDESS <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib44\" title=\"\">44</a>]</cite>, AudioMNIST <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib45\" title=\"\">45</a>]</cite>, SLURP <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib46\" title=\"\">46</a>]</cite>, and EMOVO <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.17175v3#bib.bib47\" title=\"\">47</a>]</cite> reflect performance in the Speech domain. Higher values indicate better performance across these tasks.</figcaption>\n</figure>",
176
+ "capture": "Table 5: Performance of semantic representation on the ARCH benchmark. The table shows the performance of various models across different domains. ESC-50 [36], US8K [37], FSD50K [38], and VIVAE [39] represent performance on Acoustic Events. FMA [40], MTT [41], IRMAS [42], and MS-DB [43] indicate performance in the Music domain. RAVDESS [44], AudioMNIST [45], SLURP [46], and EMOVO [47] reflect performance in the Speech domain. Higher values indicate better performance across these tasks."
177
+ },
178
+ "6": {
179
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T6\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T6.8\" style=\"width:151.8pt;height:56.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-45.1pt,16.8pt) scale(0.627051781895328,0.627051781895328) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T6.8.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T6.8.8.9.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T6.8.8.9.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.8.8.9.1.1.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T6.8.8.9.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.8.8.9.1.2.1\">Mel DT.</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T6.8.8.9.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.8.8.9.1.3.1\">STFT DT.</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T6.8.8.9.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.8.8.9.1.4.1\">UTMOS</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T6.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T6.2.2.2.2\">Baseline (=)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.2.2.2.3\">0.79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.2.2.2.4\">0.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.2.2.2.5\">2.96</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T6.4.4.4.2\">Baseline (=)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.4.4.3\">0.54</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.4.4.4\">0.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.4.4.5\">3.72</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T6.6.6.6.2\">X-codec (=)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.6.6.6.3\">0.86</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.6.6.6.4\">0.77</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.6.6.6.5\">3.71</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.8.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T6.8.8.8.2\">X-codec (=)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T6.8.8.8.3\">0.62</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T6.8.8.8.4\">0.63</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T6.8.8.8.5\">4.01</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>Comparison of reconstruction based on Mel DT., STFT DT., and UTMOS metrics using 1000 LibriTTS speech samples. \u201cDT.\u201d is short for distance</figcaption>\n</figure>",
180
+ "capture": "Table 6: Comparison of reconstruction based on Mel DT., STFT DT., and UTMOS metrics using 1000 LibriTTS speech samples. \u201cDT.\u201d is short for distance"
181
+ },
182
+ "7": {
183
+ "table_html": "<figure class=\"ltx_table\" id=\"S7.T7\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 7: </span>Comparison between baseline acoustic codec and our X-codec on music continuation.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S7.T7.3\" style=\"width:173.4pt;height:36.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-42.5pt,8.9pt) scale(0.671328260800748,0.671328260800748) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S7.T7.3.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S7.T7.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S7.T7.3.3.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T7.3.3.3.4.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S7.T7.1.1.1.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S7.T7.1.1.1.1.1\">FD</span> \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S7.T7.2.2.2.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S7.T7.2.2.2.2.1\">FAD</span> \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S7.T7.3.3.3.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S7.T7.3.3.3.3.1\">FD-MERT-layer-9</span> \n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S7.T7.3.3.4.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S7.T7.3.3.4.1.1\">Acoustic codec</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T7.3.3.4.1.2\">3.86</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T7.3.3.4.1.3\">7.08</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T7.3.3.4.1.4\">1.12</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T7.3.3.5.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S7.T7.3.3.5.2.1\">X-codec</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S7.T7.3.3.5.2.2\">3.47</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S7.T7.3.3.5.2.3\">0.20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S7.T7.3.3.5.2.4\">0.79</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
184
+ "capture": "Table 7: Comparison between baseline acoustic codec and our X-codec on music continuation."
185
+ }
186
+ },
187
+ "image_paths": {
188
+ "1": {
189
+ "figure_path": "2408.17175v3_figure_1.png",
190
+ "caption": "Figure 1: The pipeline of X-codec.",
191
+ "url": "http://arxiv.org/html/2408.17175v3/x1.png"
192
+ }
193
+ },
194
+ "validation": true,
195
+ "references": [
196
+ {
197
+ "1": {
198
+ "title": "Language models are few-shot learners.",
199
+ "author": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.",
200
+ "venue": "Advances in neural information processing systems, 33:1877\u20131901, 2020.",
201
+ "url": null
202
+ }
203
+ },
204
+ {
205
+ "2": {
206
+ "title": "A survey of large language models.",
207
+ "author": "Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al.",
208
+ "venue": "arXiv preprint arXiv:2303.18223, 2023.",
209
+ "url": null
210
+ }
211
+ },
212
+ {
213
+ "3": {
214
+ "title": "Visual instruction tuning.",
215
+ "author": "Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee.",
216
+ "venue": "Advances in neural information processing systems, 36, 2024a.",
217
+ "url": null
218
+ }
219
+ },
220
+ {
221
+ "4": {
222
+ "title": "Musiclm: Generating music from text.",
223
+ "author": "Andrea Agostinelli, Timo I Denk, Zal\u00e1n Borsos, Jesse Engel, Mauro Verzetti, Antoine Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, et al.",
224
+ "venue": "arXiv preprint arXiv:2301.11325, 2023.",
225
+ "url": null
226
+ }
227
+ },
228
+ {
229
+ "5": {
230
+ "title": "Audiolm: a language modeling approach to audio generation.",
231
+ "author": "Zal\u00e1n Borsos, Rapha\u00ebl Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matt Sharifi, Dominik Roblek, Olivier Teboul, David Grangier, Marco Tagliasacchi, et al.",
232
+ "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023.",
233
+ "url": null
234
+ }
235
+ },
236
+ {
237
+ "6": {
238
+ "title": "Neural codec language models are zero-shot text to speech synthesizers.",
239
+ "author": "Chengyi Wang, Sanyuan Chen, Yu Wu, Ziqiang Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, et al.",
240
+ "venue": "arXiv preprint arXiv:2301.02111, 2023.",
241
+ "url": null
242
+ }
243
+ },
244
+ {
245
+ "7": {
246
+ "title": "Uniaudio: An audio foundation model toward universal audio generation.",
247
+ "author": "Dongchao Yang, Jinchuan Tian, Xu Tan, Rongjie Huang, Songxiang Liu, Xuankai Chang, Jiatong Shi, Sheng Zhao, Jiang Bian, Xixin Wu, et al.",
248
+ "venue": "arXiv preprint arXiv:2310.00704, 2023a.",
249
+ "url": null
250
+ }
251
+ },
252
+ {
253
+ "8": {
254
+ "title": "Soundstream: An end-to-end neural audio codec.",
255
+ "author": "Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, and Marco Tagliasacchi.",
256
+ "venue": "IEEE/ACM Trans. Audio, Speech, Lang. Process., 30:495\u2013507, 2021.",
257
+ "url": null
258
+ }
259
+ },
260
+ {
261
+ "9": {
262
+ "title": "High-fidelity audio compression with improved rvqgan.",
263
+ "author": "Rithesh Kumar, Prem Seetharaman, Alejandro Luebs, Ishaan Kumar, and Kundan Kumar.",
264
+ "venue": "Advances in Neural Information Processing Systems, 36, 2024.",
265
+ "url": null
266
+ }
267
+ },
268
+ {
269
+ "10": {
270
+ "title": "High fidelity neural audio compression.",
271
+ "author": "Alexandre D\u00e9fossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi.",
272
+ "venue": "arXiv preprint arXiv:2210.13438, 2022.",
273
+ "url": null
274
+ }
275
+ },
276
+ {
277
+ "11": {
278
+ "title": "Hifi-codec: Group-residual vector quantization for high fidelity audio codec.",
279
+ "author": "Dongchao Yang, Songxiang Liu, Rongjie Huang, Jinchuan Tian, Chao Weng, and Yuexian Zou.",
280
+ "venue": "arXiv preprint arXiv:2305.02765, 2023b.",
281
+ "url": null
282
+ }
283
+ },
284
+ {
285
+ "12": {
286
+ "title": "Simple and controllable music generation.",
287
+ "author": "Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, and Alexandre D\u00e9fossez.",
288
+ "venue": "Advances in Neural Information Processing Systems, 36, 2024.",
289
+ "url": null
290
+ }
291
+ },
292
+ {
293
+ "13": {
294
+ "title": "Speechtokenizer: Unified speech tokenizer for speech large language models.",
295
+ "author": "Xin Zhang, Dong Zhang, Shimin Li, Yaqian Zhou, and Xipeng Qiu.",
296
+ "venue": "arXiv preprint arXiv:2308.16692, 2023.",
297
+ "url": null
298
+ }
299
+ },
300
+ {
301
+ "14": {
302
+ "title": "Audiopalm: A large language model that can speak and listen.",
303
+ "author": "Paul K Rubenstein, Chulayuth Asawaroengchai, Duc Dung Nguyen, Ankur Bapna, Zal\u00e1n Borsos, F\u00e9lix de Chaumont Quitry, Peter Chen, Dalia El Badawy, Wei Han, Eugene Kharitonov, et al.",
304
+ "venue": "arXiv preprint arXiv:2306.12925, 2023.",
305
+ "url": null
306
+ }
307
+ },
308
+ {
309
+ "15": {
310
+ "title": "Speechlm: Enhanced speech pre-training with unpaired textual data.",
311
+ "author": "Ziqiang Zhang, Sanyuan Chen, Long Zhou, Yu Wu, Shuo Ren, Shujie Liu, Zhuoyuan Yao, Xun Gong, Lirong Dai, Jinyu Li, et al.",
312
+ "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024.",
313
+ "url": null
314
+ }
315
+ },
316
+ {
317
+ "16": {
318
+ "title": "On decoder-only architecture for speech-to-text and large language model integration.",
319
+ "author": "Jian Wu, Yashesh Gaur, Zhuo Chen, Long Zhou, Yimeng Zhu, Tianrui Wang, Jinyu Li, Shujie Liu, Bo Ren, Linquan Liu, et al.",
320
+ "venue": "In 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 1\u20138. IEEE, 2023a.",
321
+ "url": null
322
+ }
323
+ },
324
+ {
325
+ "17": {
326
+ "title": "Speechgen: Unlocking the generative power of speech language models with prompts.",
327
+ "author": "Haibin Wu, Kai-Wei Chang, Yuan-Kuei Wu, and Hung-yi Lee.",
328
+ "venue": "arXiv preprint arXiv:2306.02207, 2023b.",
329
+ "url": null
330
+ }
331
+ },
332
+ {
333
+ "18": {
334
+ "title": "Lauragpt: Listen, attend, understand, and regenerate audio with gpt.",
335
+ "author": "Qian Chen, Yunfei Chu, Zhifu Gao, Zerui Li, Kai Hu, Xiaohuan Zhou, Jin Xu, Ziyang Ma, Wen Wang, Siqi Zheng, et al.",
336
+ "venue": "arXiv preprint arXiv:2310.04673, 2023.",
337
+ "url": null
338
+ }
339
+ },
340
+ {
341
+ "19": {
342
+ "title": "Speak, read and prompt: High-fidelity text-to-speech with minimal supervision.",
343
+ "author": "Eugene Kharitonov, Damien Vincent, Zal\u00e1n Borsos, Rapha\u00ebl Marinier, Sertan Girgin, Olivier Pietquin, Matt Sharifi, Marco Tagliasacchi, and Neil Zeghidour.",
344
+ "venue": "Transactions of the Association for Computational Linguistics, 11:1703\u20131718, 2023.",
345
+ "url": null
346
+ }
347
+ },
348
+ {
349
+ "20": {
350
+ "title": "Clam-tts: Improving neural codec language model for zero-shot text-to-speech.",
351
+ "author": "Jaehyeon Kim, Keon Lee, Seungjun Chung, and Jaewoong Cho.",
352
+ "venue": "arXiv preprint arXiv:2404.02781, 2024.",
353
+ "url": null
354
+ }
355
+ },
356
+ {
357
+ "21": {
358
+ "title": "Voicecraft: Zero-shot speech editing and text-to-speech in the wild.",
359
+ "author": "Puyuan Peng, Po-Yao Huang, Daniel Li, Abdelrahman Mohamed, and David Harwath.",
360
+ "venue": "arXiv preprint arXiv:2403.16973, 2024.",
361
+ "url": null
362
+ }
363
+ },
364
+ {
365
+ "22": {
366
+ "title": "Neural discrete representation learning.",
367
+ "author": "Aaron Van Den Oord, Oriol Vinyals, et al.",
368
+ "venue": "Advances in neural information processing systems, 30, 2017.",
369
+ "url": null
370
+ }
371
+ },
372
+ {
373
+ "23": {
374
+ "title": "Taming transformers for high-resolution image synthesis.",
375
+ "author": "Patrick Esser, Robin Rombach, and Bjorn Ommer.",
376
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12873\u201312883, 2021.",
377
+ "url": null
378
+ }
379
+ },
380
+ {
381
+ "24": {
382
+ "title": "Semanticodec: An ultra low bitrate semantic audio codec for general sound.",
383
+ "author": "Haohe Liu, Xuenan Xu, Yi Yuan, Mengyue Wu, Wenwu Wang, and Mark D Plumbley.",
384
+ "venue": "arXiv preprint arXiv:2405.00233, 2024b.",
385
+ "url": null
386
+ }
387
+ },
388
+ {
389
+ "25": {
390
+ "title": "Masked autoencoders that listen.",
391
+ "author": "Po-Yao Huang, Hu Xu, Juncheng Li, Alexei Baevski, Michael Auli, Wojciech Galuba, Florian Metze, and Christoph Feichtenhofer.",
392
+ "venue": "Advances in Neural Information Processing Systems, 35:28708\u201328720, 2022.",
393
+ "url": null
394
+ }
395
+ },
396
+ {
397
+ "26": {
398
+ "title": "Hubert: Self-supervised speech representation learning by masked prediction of hidden units.",
399
+ "author": "Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed.",
400
+ "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451\u20133460, 2021.",
401
+ "url": null
402
+ }
403
+ },
404
+ {
405
+ "27": {
406
+ "title": "wav2vec 2.0: A framework for self-supervised learning of speech representations.",
407
+ "author": "Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli.",
408
+ "venue": "Advances in neural information processing systems, 33:12449\u201312460, 2020.",
409
+ "url": null
410
+ }
411
+ },
412
+ {
413
+ "28": {
414
+ "title": "Utmos: Utokyo-sarulab system for voicemos challenge 2022.",
415
+ "author": "Takaaki Saeki, Detai Xin, Wataru Nakata, Tomoki Koriyama, Shinnosuke Takamichi, and Hiroshi Saruwatari.",
416
+ "venue": "arXiv preprint arXiv:2204.02152, 2022.",
417
+ "url": null
418
+ }
419
+ },
420
+ {
421
+ "29": {
422
+ "title": "Librispeech: an asr corpus based on public domain audio books.",
423
+ "author": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.",
424
+ "venue": "In 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5206\u20135210. IEEE, 2015.",
425
+ "url": null
426
+ }
427
+ },
428
+ {
429
+ "30": {
430
+ "title": "Evaluating speech features with the minimal-pair abx task: Analysis of the classical mfc/plp pipeline.",
431
+ "author": "Thomas Schatz, Vijayaditya Peddinti, Francis Bach, Aren Jansen, Hynek Hermansky, and Emmanuel Dupoux.",
432
+ "venue": "In INTERSPEECH 2013: 14th Annual Conference of the International Speech Communication Association, pages 1\u20135, 2013.",
433
+ "url": null
434
+ }
435
+ },
436
+ {
437
+ "31": {
438
+ "title": "Beats: Audio pre-training with acoustic tokenizers.",
439
+ "author": "Sanyuan Chen, Yu Wu, Chengyi Wang, Shujie Liu, Daniel Tompkins, Zhuo Chen, and Furu Wei.",
440
+ "venue": "arXiv preprint arXiv:2212.09058, 2022.",
441
+ "url": null
442
+ }
443
+ },
444
+ {
445
+ "32": {
446
+ "title": "Language models are unsupervised multitask learners.",
447
+ "author": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.",
448
+ "venue": "OpenAI blog, 1(8):9, 2019.",
449
+ "url": null
450
+ }
451
+ },
452
+ {
453
+ "33": {
454
+ "title": "The musdb18 corpus for music separation.",
455
+ "author": "Zafar Rafii, Antoine Liutkus, Fabian-Robert St\u00f6ter, Stylianos Ioannis Mimilakis, and Rachel Bittner.",
456
+ "venue": "2017.",
457
+ "url": null
458
+ }
459
+ },
460
+ {
461
+ "34": {
462
+ "title": "Panns: Large-scale pretrained audio neural networks for audio pattern recognition.",
463
+ "author": "Qiuqiang Kong, Yin Cao, Turab Iqbal, Yuxuan Wang, Wenwu Wang, and Mark D Plumbley.",
464
+ "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:2880\u20132894, 2020.",
465
+ "url": null
466
+ }
467
+ },
468
+ {
469
+ "35": {
470
+ "title": "Mert: Acoustic music understanding model with large-scale self-supervised training.",
471
+ "author": "Yizhi Li, Ruibin Yuan, Ge Zhang, Yinghao Ma, Xingran Chen, Hanzhi Yin, Chenghua Lin, Anton Ragni, Emmanouil Benetos, Norbert Gyenge, et al.",
472
+ "venue": "arXiv preprint arXiv:2306.00107, 2023.",
473
+ "url": null
474
+ }
475
+ },
476
+ {
477
+ "36": {
478
+ "title": "Esc: Dataset for environmental sound classification.",
479
+ "author": "Karol J. Piczak.",
480
+ "venue": "In ACM Multimedia, MM \u201915, New York, NY, USA, 2015. ACM.",
481
+ "url": null
482
+ }
483
+ },
484
+ {
485
+ "37": {
486
+ "title": "A dataset and taxonomy for urban sound research.",
487
+ "author": "Justin Salamon, Christopher Jacoby, and Juan Pablo Bello.",
488
+ "venue": "In ACM Multimedia, MM \u201914, New York, NY, USA, 2014. ACM.",
489
+ "url": null
490
+ }
491
+ },
492
+ {
493
+ "38": {
494
+ "title": "Fsd50k: An open dataset of human-labeled sound events.",
495
+ "author": "Eduardo Fonseca, Xavier Favory, Jordi Pons, Frederic Font, and Xavier Serra.",
496
+ "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022.",
497
+ "url": null
498
+ }
499
+ },
500
+ {
501
+ "39": {
502
+ "title": "The variably intense vocalizations of affect and emotion (vivae) corpus prompts new perspective on nonspeech perception.",
503
+ "author": "Natalie Holz, Pauline Larrouy-Maestri, and David Poeppel.",
504
+ "venue": "Emotion, 2022.",
505
+ "url": null
506
+ }
507
+ },
508
+ {
509
+ "40": {
510
+ "title": "Fma: A dataset for music analysis.",
511
+ "author": "Micha\u00ebl Defferrard, Kirell Benzi, Pierre Vandergheynst, and Xavier Bresson.",
512
+ "venue": "In Proc. Intl. Soc. Music Information Retrieval Conf. (ISMIR), 2017.",
513
+ "url": null
514
+ }
515
+ },
516
+ {
517
+ "41": {
518
+ "title": "Evaluation of algorithms using games: The case of music tagging.",
519
+ "author": "Edith Law, Kris West, Michael I Mandel, Mert Bay, and J Stephen Downie.",
520
+ "venue": "In Proc. Intl. Soc. Music Information Retrieval Conf. (ISMIR), 2009.",
521
+ "url": null
522
+ }
523
+ },
524
+ {
525
+ "42": {
526
+ "title": "A comparison of sound segregation techniques for predominant instrument recognition in musical audio signals.",
527
+ "author": "Juan J Bosch, Jordi Janer, Ferdinand Fuhrmann, and Perfecto Herrera.",
528
+ "venue": "In Proc. Intl. Soc. Music Information Retrieval Conf. (ISMIR), 2012.",
529
+ "url": null
530
+ }
531
+ },
532
+ {
533
+ "43": {
534
+ "title": "Medleydb: A multitrack dataset for annotation-intensive mir research.",
535
+ "author": "Rachel M Bittner, Justin Salamon, Mike Tierney, Matthias Mauch, Chris Cannam, and Juan Pablo Bello.",
536
+ "venue": "In Proc. Intl. Soc. Music Information Retrieval Conf. (ISMIR), volume 14, pages 155\u2013160, 2014.",
537
+ "url": null
538
+ }
539
+ },
540
+ {
541
+ "44": {
542
+ "title": "The ryerson audio-visual database of emotional speech and song (ravdess): A dynamic, multimodal set of facial and vocal expressions in north american english.",
543
+ "author": "Steven R. Livingstone and Frank A. Russo.",
544
+ "venue": "PloS one, 2018.",
545
+ "url": null
546
+ }
547
+ },
548
+ {
549
+ "45": {
550
+ "title": "AudioMNIST: Exploring explainable artificial intelligence for audio analysis on a simple benchmark.",
551
+ "author": "S\u00f6ren Becker, Johanna Vielhaben, Marcel Ackermann, Klaus-Robert M\u00fcller, Sebastian Lapuschkin, and Wojciech Samek.",
552
+ "venue": "Journal of the Franklin Institute, 2024.",
553
+ "url": null
554
+ }
555
+ },
556
+ {
557
+ "46": {
558
+ "title": "SLURP: A spoken language understanding resource package.",
559
+ "author": "Emanuele Bastianelli, Andrea Vanzo, Pawel Swietojanski, and Verena Rieser.",
560
+ "venue": "In EMNLP. ACM, November 2020.",
561
+ "url": null
562
+ }
563
+ },
564
+ {
565
+ "47": {
566
+ "title": "EMOVO corpus: an Italian emotional speech database.",
567
+ "author": "Giovanni Costantini, Iacopo Iaderola, Andrea Paoloni, and Massimiliano Todisco.",
568
+ "venue": "In LREC. European Language Resources Association (ELRA), 2014.",
569
+ "url": null
570
+ }
571
+ },
572
+ {
573
+ "48": {
574
+ "title": "Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation.",
575
+ "author": "Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, and Shlomo Dubnov.",
576
+ "venue": "In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1\u20135. IEEE, 2023c.",
577
+ "url": null
578
+ }
579
+ },
580
+ {
581
+ "49": {
582
+ "title": "Audiocaps: Generating captions for audios in the wild.",
583
+ "author": "Chris Dongjoo Kim, Byeongchang Kim, Hyunmin Lee, and Gunhee Kim.",
584
+ "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 119\u2013132, 2019.",
585
+ "url": null
586
+ }
587
+ },
588
+ {
589
+ "50": {
590
+ "title": "Wavcaps: A chatgpt-assisted weakly-labelled audio captioning dataset for audio-language multimodal research.",
591
+ "author": "Xinhao Mei, Chutong Meng, Haohe Liu, Qiuqiang Kong, Tom Ko, Chengqi Zhao, Mark D Plumbley, Yuexian Zou, and Wenwu Wang.",
592
+ "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024.",
593
+ "url": null
594
+ }
595
+ },
596
+ {
597
+ "51": {
598
+ "title": "Make-an-audio: Text-to-audio generation with prompt-enhanced diffusion models.",
599
+ "author": "Rongjie Huang, Jiawei Huang, Dongchao Yang, Yi Ren, Luping Liu, Mingze Li, Zhenhui Ye, Jinglin Liu, Xiang Yin, and Zhou Zhao.",
600
+ "venue": "arXiv preprint arXiv:2301.12661, 2023.",
601
+ "url": null
602
+ }
603
+ },
604
+ {
605
+ "52": {
606
+ "title": "Audioldm: Text-to-audio generation with latent diffusion models.",
607
+ "author": "Haohe Liu, Zehua Chen, Yi Yuan, Xinhao Mei, Xubo Liu, Danilo Mandic, Wenwu Wang, and Mark D Plumbley.",
608
+ "venue": "arXiv preprint arXiv:2301.12503, 2023.",
609
+ "url": null
610
+ }
611
+ },
612
+ {
613
+ "53": {
614
+ "title": "Benchmarking representations for speech, music, and acoustic events.",
615
+ "author": "Moreno La Quatra, Alkis Koudounas, Lorenzo Vaiani, Elena Baralis, Luca Cagliero, Paolo Garza, and Sabato Marco Siniscalchi.",
616
+ "venue": "arXiv preprint arXiv:2405.00934, 2024.",
617
+ "url": null
618
+ }
619
+ }
620
+ ],
621
+ "url": "http://arxiv.org/html/2408.17175v3"
622
+ }
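
For context on how a record like the one added above might be consumed, here is a minimal sketch in Python. It assumes the record is saved as a standalone JSON file (the path `2408.17175v3.json` below is a placeholder, not a file name taken from this commit) and that the table entries ("2", "3", ... with `table_html`/`capture`) sit under a key such as `"tables"`, which is not visible in this hunk; only the `image_paths`, `references`, and `url` fields are confirmed by the lines above.

```python
import json

# Placeholder path; the actual file name added in this commit is not shown in the hunk.
RECORD_PATH = "2408.17175v3.json"

def summarize_record(path: str) -> None:
    """Print the stripped-text captions and figure info from one record of this dataset."""
    with open(path, "r", encoding="utf-8") as f:
        record = json.load(f)

    # Assumption: the table entries shown above live under a key named "tables";
    # that key name is not visible in this hunk.
    for key, entry in record.get("tables", {}).items():
        print(f"table {key}: {entry.get('capture', '')}")

    # "image_paths" entries carry "figure_path", "caption", and "url", as shown above.
    for key, entry in record.get("image_paths", {}).items():
        print(f"figure {key}: {entry.get('caption', '')} ({entry.get('url', '')})")

    # "references" is a list of single-key dicts mapping an index to bibliographic fields.
    print(f"{len(record.get('references', []))} references, source: {record.get('url', '')}")

if __name__ == "__main__":
    summarize_record(RECORD_PATH)
```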