Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes. See raw diff
- 20240921/1706.04595v5.json +53 -0
- 20240921/2206.06420v5.json +0 -0
- 20240921/2212.05581v4.json +0 -0
- 20240921/2301.11721v2.json +509 -0
- 20240921/2303.02770v2.json +48 -0
- 20240921/2304.10392v2.json +0 -0
- 20240921/2305.14254v2.json +281 -0
- 20240921/2310.19902v2.json +201 -0
- 20240921/2311.02578v3.json +463 -0
- 20240921/2311.11208v2.json +0 -0
- 20240921/2311.15153v6.json +0 -0
- 20240921/2311.17404v2.json +127 -0
- 20240921/2401.08326v3.json +0 -0
- 20240921/2402.04648v2.json +0 -0
- 20240921/2402.12875v4.json +628 -0
- 20240921/2403.02615v2.json +0 -0
- 20240921/2403.02959v3.json +0 -0
- 20240921/2403.07483v2.json +131 -0
- 20240921/2403.08214v3.json +0 -0
- 20240921/2403.10081v3.json +0 -0
- 20240921/2403.11693v3.json +363 -0
- 20240921/2403.17765v3.json +257 -0
- 20240921/2404.02180v4.json +0 -0
- 20240921/2404.04838v2.json +0 -0
- 20240921/2404.08368v3.json +0 -0
- 20240921/2405.17520v4.json +0 -0
- 20240921/2406.03822v2.json +0 -0
- 20240921/2406.05766v2.json +592 -0
- 20240921/2406.06799v2.json +61 -0
- 20240921/2406.11802v3.json +0 -0
- 20240921/2406.16272v2.json +495 -0
- 20240921/2407.04440v2.json +230 -0
- 20240921/2407.08742v4.json +453 -0
- 20240921/2407.18957v4.json +0 -0
- 20240921/2407.18970v3.json +188 -0
- 20240921/2408.11926v2.json +0 -0
- 20240921/2408.13140v3.json +491 -0
- 20240921/2408.15020v2.json +0 -0
- 20240921/2409.06554v2.json +332 -0
- 20240921/2409.07743v2.json +153 -0
- 20240921/2409.09467v2.json +0 -0
- 20240921/2409.09539v2.json +176 -0
- 20240921/2409.10925v2.json +134 -0
- 20240921/2409.13952v1.json +0 -0
- 20240921/2409.13953v1.json +501 -0
- 20240921/2409.13972v1.json +416 -0
- 20240921/2409.13975v1.json +163 -0
- 20240921/2409.13980v1.json +0 -0
- 20240921/2409.13982v1.json +300 -0
- 20240921/2409.13984v1.json +123 -0
20240921/1706.04595v5.json
ADDED
@@ -0,0 +1,53 @@
{
"title": "Security Camera Movie and ERP Data Matching System to Prevent Theft",
"abstract": "In this paper, we propose a SaaS service that prevents shoplifting using image analysis and ERP. In Japan, the total damage from shoplifting reaches 450 billion yen. Based on cloud and data analysis technology, we propose a shoplifting prevention service for small shops that combines image analysis of security camera movies with an ERP data check. We evaluated the movie analysis.",
"sections": [
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Recently, cloud technology such as [1 ###reference_b1###][2 ###reference_b2###][3 ###reference_b3###], service coordination technology such as [4 ###reference_b4###][5 ###reference_b5###] and data analysis technology have been progressed. Big data analysis using Apache Spark or Hadoop with MapReduce achieves various analysis services.\nOn the other hand, total damage of shoplifting reaches 450 billion yen per year in Japan. To prevent shoplifting, stores adopt countermeasures such as increasing monitoring staffs in stores, checking security camera movie by human eyes or installing EAS (Electronic Article Surveillance) which alerts shoplifting at the gate of stores. However, these countermeasures need additional staff expense cost, initial cost of EAS or other systems. Thus, small shops cannot adopt them.\nBased on these backgrounds, this paper targets a low cost shoplifting prevention SaaS service for small shops using cloud technology and data analysis technology. In our proposal, machine learning framework Jubatus[6 ###reference_b6###] on a small computer deployed in a shop analyzes security cameras movie, detects anomaly behavior and notifies to a cloud. Then, a shoplifting prevention application on a cloud checks product stock using item DB of ERP and notifies smart phones of shop staffs by mails when a possibility of shoplifting is high."
|
| 10 |
+
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "II Existing Technologies and Problems",
"text": "Saburo-kun Ace[7 ###reference_b7###] is a shoplifting prevention system using security camera movie. Saburo-kun Ace detects a shoplifting from security camera movie when customers\u2019 actions match pre-defined 50 patterns of suspicious behaviors, and notifies it to staffs of shops. Shop staffs question or say something to the suspicious customer. This can reduce or prevent shopliftings. However, Saburo-kun Ace has some problems such as initial cost is high because shops need to deploy PC and movie analysis software, new shoplifting behaviors cannot be detected except for pre-defined suspicious behavior rules, actual operation may be hard because precision ratio of detection is not 100% and shop staffs often need to question customers.\nExisting technologies have two problems. The first is some technologies only can detect shoplifting based on pre-defined behavior rules. The second is accuracy of camera movie analysis is not sufficiently high so that actual operation may be difficult for staffs to question customers at unnecessary timing. Therefore, we target shoplifting prevention SaaS for small shops which detects shoplifting behavior including non-defined behavior at high accuracy and notifies shoplifting."
|
| 16 |
+
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "III Proposal of Shoplifting Prevention Service Using Image Analysis and ERP Check",
"text": "To detect shoplifting of undefined behavior, we use online machine learning technology and detect suspicious actions comparing normal operation data. To enhance accuracy of shoplifting detection, we check not only camera movie analysis results but also product item DB managed on a cloud.\nFigure 1 shows system image of proposed service. In our system, shop site which has security cameras and cloud site which manages product item data are connected by a network. Using Fig.1, we explain processing steps of proposed service.\n###figure_1### Step 1: In parallel with shoplifting detection using camera movie, a sales management terminal sends sales information to a product management application on a cloud via a network. A product management application is SaaS which provides business application of ERP, and information of sales and product item stock is stored in item DB. Product item stock information is reflected to item DB in accordance with sales.\nStep 2: Stream data of security camera movie is sent to a small computer in a shop. A small computer is a computer which has a certain degree of computation power, memory size and communication capability. For example, Rasbpberry Pi can be used for this to analyze images.\nStep 3: A small computer cuts off each image from movie and extracts feature values from the image data. To extract feature values, libraries of dlib, OpenCV can be used.\nStep 4: A small computer detects customer\u2019s suspicious behavior from feature values. To analyze stream data of feature values, we use online machine learning Jubatus[6 ###reference_b6###]. Jubatus can detect not only shoplifting behavior based on pre-defined rules but also suspicious behavior based on machine learning.\nStep 5: When Jubatus detects a shoplifting suspicion such as anomaly score is high, image data and related data are sent to a shoplifting prevention application on a cloud.\nStep 6: A shoplifting prevention application checks product item stock data in item DB because image analysis accuracy is not sufficient. If there is a shoplifting, there is inconsistency between stock in item DB and actual stock in a shelf. Actual stock in a shelf also can be detected by security camera image.\nStep 7: A shoplifting prevention application notifies an alert with suspected customer image to smart phones of shop staffs when item DB check leads a high possibility of shoplifting."
|
| 22 |
+
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "IV Confirmation of movie analysis by Jubatus",
"text": "We confirmed a precision ratio of security camera movie analysis by Jubatus stream processing. To estimate shoplifting actions, firstly we checked to judge users\u2019 posture. We implemented Jubatus plug-in which extracts feature values from an image and Python client which judges users\u2019 posture from one image of security camera by Jubatus.\nTo extract feature values, we used dlib library which extracts 68 coordinate points of eyes, nose, mouth, shape of the face from face images. From 68 coordinate points, we separate to X and Y axis and we obtain 136 feature values. For obtained feature values, we normalize relative coordinate in each face image by deducting face image position and dividing face size. Normalized data is classified by Jubatus classification functions of Linear Classifier and kNN Classifier. We adopt good precision one from two results. (See, Fig.2)\nWe trained and judged postures by Jubatus for 1,103 images and verified precision ratio. Precision ratio of k-cross validation was 72% when k was 10. This test was simple and there remained some non-tuning points. Through this verification test, we confirmed that we could judge customers\u2019 posture in a certain degree of precision ratio from security camera movie. It was also said because precision ratio of security camera analysis was not 100% and some tunings were needed for each shop environment, we needed to enhance accuracy of shoplifting detection by checking item DB of ERP.\nMachine learning models and other configurations of Jubatus are distributed from a cloud shoplifting prevention application to small computers. When we update configurationss, we will use cloud batch updating methods such as [8 ###reference_b8###] or server coordinating methods such as [9 ###reference_b9###][10 ###reference_b10###]. And when we conduct regression tests after configuration updates, we will use automatic verification methods such as [11 ###reference_b11###].\n###figure_2###"
|
| 28 |
+
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Conclusion",
"text": "We proposed a low cost shoplifting prevention service for small retail shops. In our proposal, Jubatus on small computers deployed in shop sites analyzed security camera movie, detected anomaly behaviors of customers and notified to a cloud, and a shoplifting prevention application on a cloud checked product item DB in ERP and notified shop staffs by emails when a possibility of shoplifting was high enough."
|
| 34 |
+
}
],
"appendix": [],
"tables": {},
"image_paths": {
"1": {
"figure_path": "1706.04595v5_figure_1.png",
"caption": "Figure 1: Proposed system image and processing steps",
"url": "http://arxiv.org/html/1706.04595v5/x1.png"
},
"2": {
"figure_path": "1706.04595v5_figure_2.png",
"caption": "Figure 2: Test outline of security camera movie analysis by Jubatus",
"url": "http://arxiv.org/html/1706.04595v5/x2.png"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/1706.04595v5"
}
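Section IV of 1706.04595 describes extracting 68 dlib facial landmarks per frame, normalizing them into 136 relative coordinates, and classifying posture with Jubatus (Linear and kNN classifiers). The following is a minimal sketch of that pipeline, not the authors' implementation: scikit-learn's KNeighborsClassifier stands in for the Jubatus classifiers, and the landmark model filename, label scheme, and training-sample format are assumptions.

```python
# Minimal sketch of the Section IV pipeline: extract 68 dlib face landmarks,
# normalize them into 136 relative features, and classify posture with kNN.
# scikit-learn's KNeighborsClassifier stands in for the Jubatus classifiers.
import cv2
import dlib
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

detector = dlib.get_frontal_face_detector()
# Standard dlib landmark model (assumed to be downloaded locally).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_features(image_bgr):
    """Return a 136-dim vector: landmark coordinates made relative to the
    face box position and divided by the face box size, or None if no face."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return None
    face = faces[0]
    shape = predictor(gray, face)
    xs = np.array([shape.part(i).x for i in range(68)], dtype=float)
    ys = np.array([shape.part(i).y for i in range(68)], dtype=float)
    # Normalize: subtract the face position, divide by the face size.
    xs = (xs - face.left()) / max(face.width(), 1)
    ys = (ys - face.top()) / max(face.height(), 1)
    return np.concatenate([xs, ys])  # 136 feature values

# Hypothetical training data: (image_path, posture_label) pairs,
# e.g. 0 = upright, 1 = looking down toward a shelf.
def train_posture_classifier(samples):
    X, y = [], []
    for path, label in samples:
        feat = extract_features(cv2.imread(path))
        if feat is not None:
            X.append(feat)
            y.append(label)
    clf = KNeighborsClassifier(n_neighbors=5)
    clf.fit(np.array(X), np.array(y))
    return clf
```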
20240921/2206.06420v5.json
ADDED
The diff for this file is too large to render.
See raw diff
20240921/2212.05581v4.json
ADDED
The diff for this file is too large to render.
See raw diff
20240921/2301.11721v2.json
ADDED
@@ -0,0 +1,509 @@
{
"title": "Single-Trajectory Distributionally Robust Reinforcement Learning",
"abstract": "To mitigate the limitation that the classical reinforcement learning (RL) framework heavily relies on identical training and test environments, Distributionally Robust RL (DRRL) has been proposed to enhance performance across a range of environments, possibly including unknown test environments.\nAs a price for the robustness gain, DRRL involves optimizing over a set of distributions, which is inherently more challenging than optimizing over a fixed distribution in the non-robust case.\nExisting DRRL algorithms are either model-based or fail to learn from a single sample trajectory.\nIn this paper, we design the first fully model-free DRRL algorithm, called distributionally robust Q-learning with single trajectory (DRQ).\nWe delicately design a multi-timescale framework to fully utilize each incrementally arriving sample and directly learn the optimal distributionally robust policy without modeling the environment, so the algorithm can be trained along a single trajectory in a model-free fashion.\nDespite the algorithm\u2019s complexity, we provide asymptotic convergence guarantees by generalizing classical stochastic approximation tools.\nComprehensive experimental results demonstrate the superior robustness and sample complexity of our proposed algorithm, compared to non-robust methods and other robust RL algorithms.",
"sections": [
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Reinforcement Learning (RL) is a machine learning paradigm for studying sequential decision problems.\nDespite considerable progress in recent years (Silver et al., 2016 ###reference_b29###; Mnih et al., 2015 ###reference_b21###; Vinyals et al., 2019 ###reference_b32###),\nRL algorithms often encounter a discrepancy between training and test environments. This discrepancy is widespread since test environments may be too complex to be perfectly represented in training, or the test environments may inherently shift from the training ones, especially in certain application scenarios, such as financial markets and robotic control.\nOverlooking the mismatch could impede the application of RL algorithms in real-world settings, given the known sensitivity of the optimal policy of the Markov Decision Process (MDP) to the model (Mannor et al., 2004 ###reference_b20###; Iyengar, 2005 ###reference_b15###).\nTo address this concern,\nDistributionally Robust RL (DRRL) (Zhou et al., 2021 ###reference_b38###; Yang et al., 2022 ###reference_b37###; Shi & Chi, ###reference_b28###; Panaganti & Kalathil, 2022 ###reference_b24###; Panaganti et al., 2022 ###reference_b25###; Ma et al., 2022 ###reference_b19###; Yang, 2018 ###reference_b36###; Abdullah et al., 2019 ###reference_b2###; Neufeld & Sester, 2022 ###reference_b22###) formulates the decision problem under the assumption that the test environment varies but remains close to the training environment.\nThe objective is to design algorithms optimizing the worst-case expected return over an ambiguity set encompassing all possible test distributions.\nEvaluating a DRRL policy necessitates deeper insight into the transition dynamics than evaluating a non-robust one, as it entails searching for the worst-case performance across all distributions within the ambiguity set.\nTherefore, most prior solutions are model-based, require the maintenance of an estimator for the entire transition model and the ambiguity set.\nSuch requirements may render these algorithms less practical in scenarios with large state-action spaces or where adequate modeling of the real environment is unfeasible.\nPrompted by this issue, we study a fully model-free DRRL algorithm in this paper, which learns the optimal DR policy without explicit environmental modeling.\nThe algorithm\u2019s distinctive feature is its capacity to learn from a single sample trajectory, representing the least demanding requirement for data collection.\nThis feature results from our innovative algorithmic framework, comprising incrementally updated estimators and a delicate approximation scheme.\nWhile most model-free non-robust RL algorithms support training in this setting\u2014contributing to their widespread use\u2014no existing work can effectively address the DRRL problem in this way.\nThe challenge arises from the fact that approximating a DR policy by learning from a single trajectory suffers from restricted control over state-action pairs and limited samples, i.e., only one sample at a time.\nAs we will demonstrate, a simple plug-in estimator using one sample, which is unbiased in the non-robust -learning algorithm, fails to approximate any robust value accurately.\nThe complexity of this task is further affirmed by the sole attempt to develop a model-free DRRL algorithm in (Liu et al., 2022 ###reference_b18###).\nIt relies on a restricted simulator assumption, enabling the algorithm to access an arbitrary number of samples from any state-action pair, thereby amassing sufficient system 
dynamics information before addressing the DRRL problem.\nRelaxing the dependence on a simulator and developing a fully model-free algorithm capable of learning from a single trajectory necessitates a delicate one-sample estimator for the DR value, carefully integrated into an algorithmic framework to eradicate bias from insufficient samples and ensure convergence to the optimal policy.\nMoreover, current solutions heavily depend on the specific divergence chosen to construct the ambiguity set and fail to bridge different divergences, underscoring the practical importance of divergence selection.\nThus a natural question arises:\nIs it possible to develop a model-free DRRL framework that can learn the optimal DR policy across different divergences using only a single sample trajectory for learning?"
},
{
"section_id": "1.1",
"parent_section_id": "1",
"section_name": "Our Contributions",
"text": "In this paper, we provide a positive solution to the aforementioned question by making the following contributions:\nWe introduce a pioneering approach to construct the ambiguity set using the Cressie-Read family of -divergence. By leveraging the strong duality form of the corresponding distributionally robust reinforcement learning (DRRL) problem, we reformulate it, allowing for the learning of the optimal DR policies using misspecified MDP samples. This formulation effortlessly covers widely used divergences such as the Kullback-Leibler (KL) and divergence.\nTo address the additional nonlinearity that arises from the DR Bellman equation, which is absent in its non-robust counterpart, we develop a novel multi-timescale stochastic approximation scheme. This scheme carefully exploits the structure of the DR Bellman operator. The update of the table occurs in the slowest loop, while the other two loops are delicately designed to mitigate the bias introduced by the plug-in estimator due to the nonlinearity.\nWe instantiate our framework into a DR variant of the -learning algorithm, called distributionally robust -learning with single trajectory (DRQ). This algorithm solves discount Markov Decision Processes (MDPs) in a fully online and incremental manner. We prove the asymptotic convergence of our proposed algorithm by extending the classical two-timescale stochastic approximation framework, which may be of independent interest.\nWe conduct extensive experiments to showcase the robustness and sample efficiency of the policy learned by our proposed DR -learning algorithm.\nWe also create a deep learning version of our algorithm and compare its performance to representative online and offline (robust) reinforcement learning benchmarks on classical control tasks."
},
{
"section_id": "1.2",
"parent_section_id": "1",
"section_name": "Related Work",
"text": "Robust MDPs and RL:\nThe framework of robust MDPs has been studied in several works such as Nilim & El Ghaoui (2005 ###reference_b23###); Iyengar (2005 ###reference_b15###); Wiesemann et al. (2013 ###reference_b34###); Lim et al. (2013 ###reference_b17###); Ho et al. (2021 ###reference_b14###); Goyal & Grand-Clement (2022 ###reference_b13###).\nThese works discuss the computational issues using dynamic programming with different choices of MDP formulation, as well as the choice of ambiguity set, when the transition model is known.\nRobust Reinforcement Learning (RL) (Roy et al., 2017 ###reference_b26###; Badrinath & Kalathil, 2021 ###reference_b3###; Wang & Zou, 2021 ###reference_b33###) relaxes the requirement of accessing to the transition model by simultaneously approximating to the ambiguity set as well as the optimal robust policy, using only the samples from the misspecified MDP.\nOnline Robust RL:\nExisting online robust RL algorithms including Wang & Zou (2021 ###reference_b33###); Badrinath & Kalathil (2021 ###reference_b3###); Roy et al. (2017 ###reference_b26###), highly relies on the choice of the -contamination model and could suffer over-conservatism.\nThis ambiguity set maintains linearity in their corresponding Bellman operator and thus inherits most of the desirable benefits from its non-robust counterpart.\nInstead, common distributionally robust ambiguity sets, such as KL or divergence ball, suffer from extra nonlinearity when trying to learn along a single-trajectory data, which serves as the foundamental challenge in this paper.\nDistributionally Robust RL:\nTo tackle the over-conservatism aroused by probability-agnostic -contamination ambiguity set in the aforementioned robust RL, DRRL is proposed by constructing the ambiguity set with probability-aware distance (Zhou et al., 2021 ###reference_b38###; Yang et al., 2022 ###reference_b37###; Shi & Chi, ###reference_b28###; Panaganti & Kalathil, 2022 ###reference_b24###; Panaganti et al., 2022 ###reference_b25###; Ma et al., 2022 ###reference_b19###), including KL and divergence.\nAs far as we know, most of the existing DRRL algorithms fall into the model-based fashion, which first estimate the whole transition model and then construct the ambiguity set around the model.\nThe DR value and the corresponding policy are then computed based upon them.\nTheir main focus is to understand the sample complexity of the DRRL problem in the offline RL regime, leaving the more prevalent single-trajectory setting largely unexplored."
|
| 22 |
+
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Preliminary",
"text": ""
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "Discounted MDPs",
"text": "Consider an infinite-horizon MDP where and are finite state and action spaces with cardinality and .\n is the state transition probability measure.\nHere is the set of probability measures over .\n is the reward function and is the discount factor.\nWe assume that is deterministic and bounded in .\nA stationary policy maps, for each state to a probability distribution over the action set and induce a random trajectory , with , and for .\nTo derive the policy corresponding to the value function, we define the optimal state-action function as the expected cumulative discounted rewards under the optimal policy,\n\nThe optimal state-action function is also the fixed point of the Bellman optimality equation,"
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "-learning",
"text": "Our model-free algorithmic design relies on the -learning template, originally designed to solve the non-robust Bellman optimality equation (Equation 1 ###reference_###). -learning is a model-free reinforcement learning algorithm that uses a single sample trajectory to update the estimator for the function incrementally. Suppose at time , we draw a sample from the environment.\nThen, the algorithm updates the estimated -function following:\nHere, is a learning rate.\nThe algorithm updates the estimated function by constructing a unbiased estimator for the true value, i.e., using one sample."
|
| 40 |
+
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "Distributionally Robust MDPs",
"text": "DRRL learns an optimal policy that is robust to unknown environmental changes, where the transition model and reward function may differ in the test environment.\nTo focus on the perturbation of the transition model, we assume no pertubation to the reward function.\nOur approach adopts the notion of distributional robustness, where the true transition model is unknown but lies within an ambiguity set that contains all transition models that are close to the training environment under some probability distance .\nTo ensure computational feasibility, we construct the ambiguity set in the -rectangular manner, where for each , we define the ambiguity set as,\nWe then build the ambiguity set for the whole transition model as the Cartesian product of every -ambiguity set, i.e., .\nGiven , we define the optimal DR state-action function as the value function of the best policy to maximize the worst-case return over the ambiguity set,\nUnder the -rectangular assumption, the Bellman optimality equation has been established by Iyengar (2005 ###reference_b15###); Xu & Mannor (2010 ###reference_b35###),\nFor notation simplicity, we would ignore the superscript ."
|
| 46 |
+
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Distributionally Robust -learning with Single Trajectory",
"text": "This section presents a general model-free framework for DRRL.\nWe begin by instantiating the distance as Cressie-Read family of -divergence (Cressie & Read, 1984 ###reference_b9###), which is designed to recover previous common choices such as the and KL divergence.\nWe then discuss the challenges and previous solutions in solving the corresponding DRRL problem, as described in Section 3.2 ###reference_###. Finally, we present the design idea of our three-timescale framework and establish the corresponding convergence guarantee."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Divergence Families",
"text": "Previous work on DRRL has mainly focused on one or several divergences, such as KL, , and total variation (TV) divergences.\nIn contrast, we provide a unified framework that applies to a family of divergences known as the Cressie-Read family of -divergences.\nThis family is parameterized by , and for any chosen , the Cressie-Read family of -divergences is defined as\nwith .\nBased on this family, we instantiate our ambiguity set in Equation 2 ###reference_### as for some radius .\nThe Cressie-Read family of -divergence includes -divergence () and KL divergence ().\nOne key challenge in developing DRRL algorithms using the formulation in Equation 3 ###reference_### is that the expectation is taken over the ambiguity set , which is computationally intensive even with the access to the center model .\nSince we only have access to samples generated from the possibly misspecific model , estimating the expectation with respect to other models is even more challenging. While importance sampling-based techniques can achieve this, the cost of high variance is still undesirable.\nTo solve this issue, we rely on the dual reformulation of Equation 3 ###reference_###:\nFor any random variable , define with and .\nThen\nHere . Equation 4 ###reference_### shows that protecting against the distribution shift is equivalent to optimizing the tail-performance of a model, as only the value below the dual variable are taken into account.\nAnother key insight from the reformulation is that as the growth of for large becomes steeper for larger , the -divergence ball shrinks and the risk measure becomes less conservative.\nThis bridges the gap between difference divergences, whereas previous literature, including Yang et al. (2022 ###reference_b37###) and Zhou et al. (2021 ###reference_b38###), treats different divergences as separate.\nBy applying the dual reformulation, we can rewrite the Cressie-Read Bellman operator in Equation 3 ###reference_### as"
|
| 58 |
+
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Bias in Plug-in Estimator in Single Trajectory Setting",
"text": "In this subsection, we aim to solve Equation 5 ###reference_### using single-trajectory data, which has not been addressed by previous DRRL literature.\nAs we can only observe one newly arrival sample each time, to design a online model-free DRRL algorithm, we need to approximate the expectation in Equation 5 ###reference_### using that single sample properly.\nAs mentioned in Section 2.2 ###reference_###, the design of the -learning algorithm relies on an one-sample unbiased estimator of the true Bellman operator.\nHowever, this convenience vanishes in the DR Bellman operator.\nTo illustrate this, consider plugging only one sample into the Cressie-Read Bellman operator Equation 5 ###reference_###:\nThis reduces to the non-robust Bellman operator and is obviously not an unbiased estimator for . This example reveals the inherently more challenging nature of the online DRRL problem. Whereas non-robust RL only needs to improve the expectation of the cumulative return, improving the worst-case return requires more information about the system dynamics, which seems hopeless to be obtained from only one sample and sharply contrasts with our target.\nEven with the help of batch samples, deriving an appropriate estimator for the DR Bellman operator is still nontrivial.\nConsider a standard approach to construct estimators, sample average approximation (SAA):\ngiven a batch of sample size starting from a fix state-action pair , i.e., , the SAA empirical Bellman operator is defined as:\nHere, is the empirical Cressie-Read functional defined as\nAs pointed out by Liu et al. (2022 ###reference_b18###), the SAA estimator is biased, prompting the introduction of the multilevel Monte-Carlo method (Blanchet & Glynn, 2015 ###reference_b4###). Specifically, it first obtains samples from the distribution , and then uses the simulator to draw samples . The samples are further decomposed into two parts: consists of the first samples, while contains the remaining samples. Finally, the DR term in Equation 5 ###reference_### is approximated by solving three optimization problems:\nHowever, this multilevel Monte-Carlo solution requires a large batch of samples for the same state-action pair before the next update, resulting in unbounded memory costs/computational time that are not practical.\nFurthermore, it is prohibited in the single-trajectory setting, where each step only one sample can be observed.\nOur experimental results show that simply approximating the Bellman operator with simulation data, without exploiting its structure, suffers from low data efficiency."
|
| 64 |
+
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "Three-timescale Framework",
"text": "The -learning is solving the nonrobust Bellman operator\u2019s fixed point in a stochastic approximation manner.\nA salient feature in the DR Bellman operator, compared with its nonrobust counterpart, is a bi-level optimization nature, i.e., jointly solving the dual parameter and the fixed point of the Bellman optimality equation.\nWe revisit the stochastic approximation view of the -learning and develop a three-timescale framework, by a faster running estimate of the optimal dual parameter, and a slower update of the table.\nTo solve Equation 5 ###reference_### using a stochastic approximation template, we iteratively update the variables and table as follows: for the -th iteration after observing a new transition sample and some learning rates ,\nAs the update of and relies on each other, we keep the learning speeds of and , i.e., and , different to stabilize the training process.\nAdditionally, due to the -rectangular assumption, is independent across different -pairs, while the table depends on each other.\nThe independent structure for allows it to be estimated more easily; so we approximate it in a faster loop, while for we update it in a slower loop."
},
{
"section_id": "3.4",
"parent_section_id": "3",
"section_name": "Algorithmic Design",
"text": "In this subsection, we further instantiate the three-timescale framework to the Cressie-Read family of -divergences.\nFirst, we compute the gradient of in Equation 5 ###reference_### with respect to .\nwhere\nDue to the nonlinearity in Equation 6 ###reference_###, the plug-in gradient estimator is in fact biased.\nThe bias arises as for a random variable , for in .\nTo address this issue, we introduce another even faster timescale to estimate and ,\nIn the medium timescale, we approximate by incrementally update the dual variable using the stochastic gradient descent method, where the true gradient computed in Equation 6 ###reference_### is approximated by:\nFinally, we update the DR function in the slowest timescale using Equation 12 ###reference_###,\nwhere is the empirical version of Equation 5 ###reference_### in the -th iteration:\nHere and are learning rates for three timescales at time , which will be specified later.\nWe summarize the ingredients into our DR -learning (DRQ) algorithm (Algorithm 1 ###reference_###), and prove the almost surely (a.s.) convergence of the algorithm as Theorem 3.3 ###reference_theorem3###.\nThe proof is deferred in Appendix C ###reference_###.\nThe estimators at the n-th step in Algorithm 1 ###reference_###, , converge to a.s. as , where and are the fixed-point of the equation , and and are the corresponding quantity under and .\nThe proof establishes that, by appropriately selecting stepsizes to prioritize frequent updates of and , followed by , and with updated at the slowest rate, the solution path of closely tracks a system of three-dimensional ordinary differential equations (ODEs) considering martingale noise.\nOur approach is to generalize the classic machinery of two-timescale stochastic approximation (Borkar, 2009 ###reference_b5###) to a three-timescale framework, and use it to analyze our proposed algorithm.\nSee Appendix B ###reference_### for the detailed proof."
|
| 76 |
+
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Experiments",
"text": "We demonstrate the robustness and sample complexity of our DRQ algorithm in the Cliffwalking environment (Del\u00e9tang et al., 2021 ###reference_b10###) and American put option environment (deferred in Appendix A ###reference_###).\nThese environments provide a focused perspective on the policy and enable a clear understanding of the key parameters effects.\nWe develop a deep learning version of DRQ and compare it with practical online and offline (robust) RL algorithms in classical control tasks, LunarLander and CartPole."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "Convergence and Sample Complexity",
"text": "Before we begin, let us outline the key findings and messages conveyed in this subsection:\n(1) Our ambiguity set design provides substantial robustness, as demonstrated through comparisons with non-robust -learning and -contamination ambiguity sets (Wang & Zou, 2021 ###reference_b33###).\n(2) Our DRQ algorithm exhibits desirable sample complexity, significantly outperforming the multi-level Monte Carlo based DRQ algorithm proposed by Liu et al. (2022 ###reference_b18###) and comparable to the sample complexity of the model-based DRRL algorithm by Panaganti & Kalathil (2022 ###reference_b24###).\n###figure_1### ###figure_2### ###figure_3### ###figure_4### Experiment Setup:\nThe Cliffwalking task is commonly used in risk-sensitive RL research (Del\u00e9tang et al., 2021 ###reference_b10###). Compared to the Frozen Lake environment used by Panaganti & Kalathil (2022 ###reference_b24###), Cliffwalking offers a more intuitive visualization of robust policies (see Figure 1 ###reference_###). The task involves a robot navigating from an initial state of to a goal state of . At each step, the robot is affected by wind, which causes it to move in a random direction with probability . Reaching the goal state earns a reward of , while encountering a wave in the water region results in a penalty of . We train the agent in the nominal environment with for 3 million steps per run, using an -greedy exploration strategy with . We evaluate its performance in perturbed environments, varying the choices of and to demonstrate different levels of robustness.\nWe set the stepsize parameters according to Assumption B.1 ###reference_theorem1###: , , and , where the discount factor is .\n###figure_5### ###figure_6### ###figure_7### ###figure_8### Robustness:\nTo evaluate the robustness of the learned policies, we compare their cumulative returns in perturbed environments with over 100 episodes per setting.\nWe visulize the decision at each status in Figure 1 ###reference_### with different robustness level .\nIn particular, the more robust policy tends to avoid falling into the water, thus arrives to the goal state with a longer path by keeping going up before going right.\nFigure 2a ###reference_sf1### shows the return distribution for each policy. Figure 2b ###reference_sf2### displays the time taken for the policies to reach the goal, and the more robust policy tends to spend more time, which quantitatively supports our observations in Figure 1 ###reference_###. Interestingly, we find that the robust policies outperform the nonrobust one even in the nominal environment.\nFor the different \u2019s, is the best within a relatively wide range (), while is preferred in the environment of extreme pertubation ().\nThis suggests that DRRL provides a elegant trade-off for different robustness preferences.\nWe also compare our model-free DRRL algorithm with the robust RL algorithm presented in Wang & Zou (2021 ###reference_b33###), which also supports training using a single trajectory.\nThe algorithm in Wang & Zou (2021 ###reference_b33###) uses an -contamination ambiguity set.\nWe select the best value of from to and other detailed descriptions in Appendix A ###reference_###. 
In most cases, the -contamination based algorithm performs very similarly to the non-robust benchmark, and even performs worse in some cases (i.e., and ), due to its excessive conservatism.\nAs we mentioned in Section 3.1 ###reference_###, larger would render the the risk measure less conservative and thus less sensitive to the change in the ball radius , which is empirically confirmed by Figure 2c ###reference_sf3###.\n###figure_9### Sample Complexity:\nThe training curves in Figure 3 ###reference_### depict the estimated value (solid line) and the optimal robust value (dashed line) for the initial state .\nThe results indicate that the estimated value converges quickly to the optimal value, regardless of the values of and . Importantly, our DRQ algorithm achieves a similar convergence rate to the non-robust baseline (represented by the black line).\nWe further compare our algorithm with two robust baselines: the DRQ algorithm with a weak simulator proposed by Liu et al. (2022 ###reference_b18###) (referred to as Liu\u2019s), and the model-based algorithm introduced by Panaganti & Kalathil (2022 ###reference_b24###) (referred to as Model) in Figure 4 ###reference_###.\nTo ensure a fair comparison, we set the same learning rate, , for our DRQ algorithm and the -table update loop of the Liu\u2019s algorithm, as per their recommended choices.\nOur algorithm converges to the true DR value at a similar rate as the model-based algorithm, while the Liu\u2019s algorithm exhibits substantial deviation from the true value and converges relatively slowly. Our algorithm\u2019s superior sample efficiency is attributed to the utilization of first-order information to approximate optimal dual variables, whereas Liu\u2019s relies on a large amount of simulation data for an unbiased estimator.\n###figure_10###"
|
| 88 |
+
},
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "Practical Implementation",
"text": "We validate the practicality of our DRQ framework by implementing a practical version, called the Deep Distributionally Robust -learning (DDRQ) algorithm, based on the DQN algorithm (Mnih et al., 2015 ###reference_b21###). We apply this algorithm to two classical control tasks from the OpenAI Gym (Brockman et al., 2016 ###reference_b7###): CartPole and LunarLander.\nOur practical algorithm, denoted as Algorithm 2 ###reference_###, is a variant of Algorithm 1 ###reference_###.\nSpecifically, we adopt the Deep Q-Network (DQN) architecture (Mnih et al., 2015 ###reference_b21###) and employ two sets of neural networks as functional approximators. One set, and , serves as approximators for the function, while the other set, and , approximates the distributionally robust dual variable . To enhance training stability, we introduce a target network, , for the fast network and for the fast dual variable network .\nDue to the approximation error introduced by neural networks and to further improve sample efficiency, our practical DDRQ algorithm adopts a two-timescale update approach.\nIn this approach, our network aims to minimize the Bellman error, while the dual variable network strives to maximize the DR value defined in Equation 5 ###reference_###.\nIt\u2019s important to note that the two-timescale update approach could introduce bias in the convergence of the dual variable, and thus the dual variable may not the optimal dual variable for the primal problem.\nGiven the primal-dual structure of this DR problem, this could render an even lower target value for the network to learn.\nThis approach can be understood as a robust update strategy for our original DRRL problem, share some spirits to the optimization techniques used in other algorithms like Variational Autoencoders (VAE)(Kingma & Welling, 2013 ###reference_b16###), Proximal Policy Optimization (PPO)(Schulman et al., 2017 ###reference_b27###), and Maximum a Posteriori Policy Optimization (MPO) (Abdolmaleki et al., 2018 ###reference_b1###). Additional experimental details can be found in Appendix A.3 ###reference_###.\nTo assess the effectiveness of our DDRQ algorithm, we compare it against the RFQI algorithm (Panaganti et al., 2022 ###reference_b25###), the soft-robust RL algorithm (Derman et al., 2018 ###reference_b11###), and the non-robust DQN and FQI algorithms. This comparison encompasses representative practical (robust) reinforcement learning algorithms for both online and offline datasets.\nTo evaluate the robustness of the learned policies, we introduce action and physical environment perturbations. For action perturbation, we simulate the perturbations by varying the probability of randomly selecting an action for both CartPole and LunarLander tasks. We test with for CartPole and for LunarLander.\nRegarding physical environment perturbation in LunarLander, we decrease the power of all the main engine and side engines by the same proportions, ranging from 0 to . 
For CartPole, we reduce the \u201dforce mag\u201d parameter from to .\nWe set the same ambiguity set radius for both our DDRQ and RFQI algorithm for fair comparisons.\nFigure 5 ###reference_### illustrates how our DDRQ algorithm successfully learns robust policies across all tested tasks, achieving comparable performance to other robust counterparts such as RFQI and SR-DQN.\nConversely, the non-robust DQN and FQI algorithms fail to learn robust policies and deteriorate significantly even under slight perturbations.\nIt is worth noting that RFQI does not perform well in the LunarLander environment, despite using the official code provided by the authors. This outcome could be attributed to the restriction to their TV distance in constructing the ambiguity set, while our Creass-Read ambiguity set can be flexibily chosen to well adopted to the environment nature.\nAdditionally, the soft-robust RL algorithm requires generating data based on multiple models within the ambiguity set. This process can be excessively time-consuming, particularly in large-scale applications."
|
| 94 |
+
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Conclusion",
"text": "In this paper, we introduce our DRQ algorithm, a fully model-free DRRL algorithm trained on a single trajectory.\nBy leveraging the stochastic approximation framework, we effectively tackle the joint optimization problem involving the state-action function and the DR dual variable.\nThrough an extension of the classic two-timescale stochastic approximation framework, we establish the asymptotic convergence of our algorithm to the optimal DR policy. Our extensive experimentation showcases the convergence, sample efficiency, and robustness improvements achieved by our approach, surpassing non-robust methods and other robust RL algorithms.\nOur DDRQ algorithm further validates the practicality of our algorithmic framework."
}
],
"appendix": [
{
"section_id": "Appendix x1",
"parent_section_id": null,
"section_name": "Appendix",
"text": "In the subsequent sections, we delve into the experimental specifics and provide the technical proofs that were not included in the primary content.\nIn Section A ###reference_###, we commence by showcasing an additional experiment on the American call option. This aligns with the convergence and sample complexity discussions from the main content. We then elucidate the intricacies of Liu\u2019s algorithm to facilitate a transparent comparison with our methodology. Lastly, we discuss the algorithmic intricacies of our DDRQ algorithm and provide details on the experiments that were previously omitted.\nIn Section B ###reference_###, to prove Theorem 3.3 ###reference_theorem3###, we begin by extending the two-timescale stochastic approximation framework to a three-timescale one. Following this, we adapt it to our algorithm, ensuring all requisite conditions are met."
},
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A Additional Experiments Details",
"text": "In this section, we present additional experimental results from a simulated American put option problem (Cox et al., 1979 ###reference_b8###) that has been previously studied in robust RL literature (Zhou et al., 2021 ###reference_b38###; Tamar et al., 2014 ###reference_b30###).\nThe problem involves holding a put option in multiple stages, whose payoff depends on the price of a financial asset that follows a Bernoulli distribution.\nSpecifically, the next price at stage follows,\nwhere the and are the price up and down factors and is the probability that the price goes up. The initial price is uniformly sampled from , where is the strike price and in our simulation. The agent can take an action to exercise the option () or not exercise () at the time step . If exercising the option, the agent receives a reward and the state transits into an exit state.\nOtherwise, the price will fluctuate based on the above model and no reward will be assigned.\nMoreover we introduce a discount structure in this problem, i.e., the reward in the stage worths in stage as our algorithm is designed for discounted RL setting.\nIn our experiments, we set , , and . We limit the price in and discretize with the precision of 1 decimal place. Thus the state space size .\n###figure_11### We first demonstrate the robustness gain of our DR -learning algorithm by comparing with the non-robust -learning algorithm, and investigate the effect of different robustness levels by varying .\nEach agent is trained for steps with an -greedy exploration policy of and evaluated in perturbed environments.\nWe use the same learning rates for the three timescales in our DR -learning algorithm as in the Cliffwalking environment: , , and .\nFor the non-robust -learning we set the same learning rate as in our -update, i.e., .\nWe perturb the transition probability to the price up and down status , and evaluate each agent for episodes.\nFigure 6 ###reference_### reports the average return and one standard deviation level.\nThe non-robust -learning performs best when the price tends to decrease and the market gets more benefitial (), which benefits the return of holding an American put option.\nHowever, when the prices tend to increase and the market is riskier (), our DR -learning algorithm significantly outperforms the non-robust counterpart, demonstrating the robustness gain of our algorithm against worst-case scenarios.\n###figure_12### We present the learning curve of our DR -learning algorithm with different in Figure 7 ###reference_###.\nOur algorithm can accurately learn the DR value under different \u2019s and \u2019s within million steps.\nWe compare the sample efficiency of our algorithm with the DR -learning algorithm in Liu et al. 
(2022 ###reference_b18###) (referred to as Liu\u2019s) and the model-based algorithm in Panaganti & Kalathil (2022 ###reference_b24###) (referred to as Model).\nWe set a smaller learning rate for Liu\u2019s as .\nThe reason is setting the same learning rate for their algorithm would render a much slower convergence performance, which is not fair for comparisons.\nWe use the recommended choice for the sampling procedure in Liu algorithm.\nBoth DR -learning and Liu are trained for steps per run, while the model-based algorithm is trained for steps per run to ensure sufficient samples for convergence.\nAs shown in Figure 8 ###reference_###,\nthe model-based approach is the most sample-efficient, converging accurately to the optimal robust value with less than samples.\nOur DR -learning algorithm is slightly less efficient, using samples to converge.\nLiu algorithm is significantly less efficient, using samples to converge.\nNote that the model-based approach we compared here is to first obtain samples for each state-action pairs, and then conduct the learning procedure to learn the optimal robust value.\nIn particular, we need to specify the number of samples for each state-action pair .\nThen the total number of samples used is the sum of all these number, i.e., , whose computation manner is different from that in the model-free algorithms we used where each update requires one or a batch of new samples.\nTo ensure self-containment, we provide the pseudocode for our implemented Liu algorithm (Algorithm 3 ###reference_###) and the model-based algorithm (Algorithm 2 ###reference_###) below. These algorithms were not originally designed to solve the ambiguity set constructed by the Cressie-Read family of -divergences.\nIn this subsection, we provide the pseudo-code for the Liu algorithm, represented in Algorithm 2 ###reference_###. Our intention is to emphasize the differences in algorithmic design between their approach and ours.\nTheir algorithm, in particular, relies extensively on multi-level Monte Carlo, requiring the sampling of a batch of samples for each state-action pair. Once they estimate the Doubly Robust (DR) value for a specific state-action pair, the samples are promptly discarded and subsequently resampled from a simulator. To summarize, their algorithm exhibits significant distinctions from ours in terms of algorithmic design.\n###figure_13### In this section, we provide a comprehensive description of our Deep Distributionally Robust -learning (DDRQ) algorithm, as illustrated in Algorithm 2 ###reference_###, along with its experimental setup in the context of CaroPole and LunarLander.\nMost of the hyperparameters are set the same for both LunarLander and CartPole.\nWe choose Cressie-Read family parameter , which is indeed the ambiguity set and we set ambiguity set radius as .\nFor RFQI we also use the same for fair comparison. Our replay buffer size is set and the batch size for training is set .\nOur fast and network are update every 10 steps () and the target networks are updated every 500 steps (). The learning rate for network is and for network is .\nThe network and the network both employ a dual-layer structure, with each layer consisting of 120 dimensions.\nFor exploration scheme, we choose epsilon-greedy exploration with linearly decay epsilon with ending .\nThe remain parameters tuned for each environments are referred in Table 1 ###reference_###."
|
| 114 |
+
},
{
"section_id": "Appendix 2",
"parent_section_id": null,
"section_name": "Appendix B Multiple Timescale Convergence",
"text": "We fix some notations that will be used in the following proof.\nFor a positive integer , denotes the set . denotes the cardinality of the set .\nWe adopt the standard asymptotic notations: for two non-negative sequences and , iff .\n is the simplex on a dimensional space, i.e., .\nFor any vector and any semi-positive matrix with , we denote .\n is Euclidean norm.\nIn this subsection, we outline the roadmap for establishing the a.s. convergence of the Algorithm 1 ###reference_###.\nFor ease of presentation, our analysis is given for the synchronous case, where every entry of the function is updated at each timestep. Extension to the asynchronous case, where only one state-action pair entry is updated at each timestep, follows Tsitsiklis (1994 ###reference_b31###).\nOur approach is to generalize the classic machinery of two-timescale stochastic approximation (Borkar, 2009 ###reference_b5###) to a three-timescale framework, and use it to analyze our proposed algorithm.\nWe rewrite the Algorithm 1 ###reference_### as\nHere, we use to represent the and jointly.\nTo echo with our algorithm, and are defined as,\nIn the update of (Equation 15 ###reference_###), and are defined as\nFinally in the update of (Equation 16 ###reference_###), and are defined as\nThe algorithm 1 ###reference_### approximates the dynamic described by the system of , and through samples along a single trajectory, with the resulting approximation error manifesting as martingale noise conditioned on some filtration and the error terms and .\nTo analyze the dynamic of algorithm 1 ###reference_###, we first obtain the continuous dynamic of , and using ordinary differential equations (ODEs) analysis.\nThe second step is to analyze the stochastic nature of the noise term and the error terms and , to ensure that they are negligible compared to the main trend of , , and , which is achieved by the following stepsizes,\nThe stepsizes satisfy\nThese stepsize schedules satisfy the standard conditions for stochastic approximation algorithms, ensuring that (1). the key quantities in gradient estimator update on the fastest timescale, (2). the dual variable for the DR problem, , update on the intermediate timescale; and (3). the table updates on the slowest timescale.\nExamples of such stepsize are and .\nNotably, the first two conditions in Condition B.1 ###reference_theorem1### ensure the martingale noise is negligible.\nThe different stepsizes for the three loops specificed by the third and fourth conditions ensures that and are sufficiently estimated with respect to the and , and these outer two loops are free from bias or noise in the stochastic approximation sense.\nUnder Condition B.1 ###reference_theorem1###, when analyzing the behavior of the , the and the can be viewed as quasi-static.\nTo study the behavior of the fastest loop, we analyze the following ODEs:\nand prove that ODEs (17 ###reference_###) a.s. converge to for proper and and some mapping .\nSimilarly, can be viewed as fixed when analyzing the behavior of , and the corresponding ODEs to understand its behavior are\nBy exploiting the dual form of the distributionally robust optimization problem, we can prove these ODEs converge to the set for some mapping and with is the set containing all the mapping from to .\nLastly, we examine the slowest timescale ODE given by\nand employ our analysis to establish the almost sure convergence of Algorithm 1 ###reference_### to the globally optimal pair .\nLet (resp. ) be nonnegative (resp. 
positive) sequences and scalars such that for all ,\nThen for ,\nFor continuous and scalars\nimplies\nConsider the stochastic approximation scheme given by\nwith the following Condition:\nis Lipschitz.\nThe sequence satisfies .\nis a martingale difference sequence with respect to the filtration , there exists such that a.s..\nThe functions satisfy as uniformly on compacts for some continuous function . In addition, the ODE\nhas the origin as its globally asymptotically stable equilibrium.\nWe then have\nUnder Condition B.4 ###reference_theorem4### to B.6 ###reference_theorem6###, we have\n a.s.\nSee Section 2.2 and 3.2 in Borkar (2009 ###reference_b5###) for the proof.\nAs the stability proofs in Section 3.2 of Borkar (2009 ###reference_b5###) are path-wise, we can apply this result to analyze multiple timescales dynamic.\nConsider the scheme\nwhere , , , are martingale difference sequences with respect to the -fields , and the form decreasing stepsize sequences.\nIt is instructive to compare the stochastic update algorithms from Equations 20 ###reference_### to 22 ###reference_### with the following o.d.e.,\nin the limit that and , .\nWe impose the following conditions, which are necessary for the a.s. convergence for each timescale itself and are commonly used in the literature of stochastic approximation algorithms, e.g., (Borkar, 2009 ###reference_b5###).\nand is -Lipschitz map for some and is bounded.\nFor and , is a martingale differeence sequence with respect to the increasing family of -fields .\nFurthermore, there exists some , such that for and ,\n, a.s..\nFor each and , has a globally asymptotically stable equilibrium , where is a -Lipschitz map for some .\nFor each , has a globally asymptotically stable equilibrium , where is a -Lipschitz map for some .\nhas a globally asymptotically stable equilibrium .\nConditions B.9 ###reference_theorem9###, B.10 ###reference_theorem10###, B.11 ###reference_theorem11### and B.12 ###reference_theorem12### are necessary for the a.s. convergence for each timescale itself.\nMoreover, Condition B.12 ###reference_theorem12### itself requires Conditions like B.9 ###reference_theorem9###, B.10 ###reference_theorem10###, B.11 ###reference_theorem11###, with an extra condition like Condition B.6 ###reference_theorem6###.\nInstead, we need to prove the boundedness for each timescale, thus the three timescales version is as follow\nThe ODE\nall have the origin as their globally asymptotically stable equilibrium for each and , where\nWe have the following results, which appears as a three timescales extension of Lemma 6.1 in Borkar (2009 ###reference_b5###) and serves as a auxiliary lemma for the our a.s. 
convergence.\nUnder the conditions B.9 ###reference_theorem9###, B.10 ###reference_theorem10###, B.11 ###reference_theorem11### and B.12 ###reference_theorem12###.\n a.s..\nRewrite Equations 21 ###reference_### and 22 ###reference_### as\nwhere , , , .\nNote that as .\nConsider them as the special case in the third extension in Section 2.2 in Borkar (2009 ###reference_b5###) and then we can conclude that converges to the internally chain transitive invariant sets of the o.d.e.,\nwhich implies that .\nRewrite Equation 22 ###reference_### again as\nwhere and .\nWe use the same extension again and can conclude that converges to the internally chain transitive invariant sets of the o.d.e.,\nThus .\n\u220e\nUnder the Condition B.9 ###reference_theorem9### to B.16 ###reference_theorem16###, .\nLet and for .\nDefine the piecewise linear continuous function where and for with any .\nLet .\nFor any , denote . Then for , we have\nWe further define as the trajectory of with .\nTaking the difference between Equation 23 ###reference_### and the Equation 24 ###reference_### we have\nWe analyze the I term. For notation simplicity we ignore the supsript .\nBy the Lipschitzness of the we have\nwhich implies\nBy Gronwall\u2019s inequality (Lemma B.3 ###reference_theorem3###), we have\nThus for all , we have\nFor any and ,\nwhere the last inequality is from the construction of .\nFinally we can conclude\nFor the III term, it converges to zero from the martingale convergence property.\nSubtracting equation 23 ###reference_### from 24 ###reference_### and take norms, we have\nDefine .\nNote that a.s. .\nLet .\nThus, above inequality becomes\nThus the above inequality becomes\nNote that and , then using the discrete Gronwall lemma (Lemma B.2 ###reference_theorem2###) we have\nFollowing the similar logic as in Lemma 1 in Borkar (2009 ###reference_b5###), we can extend the above result to the case where .\nThen using the proof of Theorem 2 of Chapter 2 in Borkar (2009 ###reference_b5###), we get a.s. and thus by Lemma B.17 ###reference_theorem17### the proof can be concluded.\n\u220e"
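To make the timescale separation in Condition B.1 concrete, the following sketch runs a generic three-timescale stochastic-approximation loop. It is illustrative only: the one-dimensional toy system, the polynomial stepsizes a_n = n^{-0.6}, b_n = n^{-0.8}, c_n = 1/n, and all variable names are our own stand-ins for the (gradient-estimator, dual-variable, Q-table) triple, chosen so that the fastest iterate tracks its equilibrium given the slower ones, as in the quasi-static ODE argument above.

```python
import numpy as np

rng = np.random.default_rng(0)

def stepsizes(n):
    # Fastest, intermediate, slowest: a_n >> b_n >> c_n for large n,
    # with sum a_n = inf and sum a_n^2 < inf (likewise for b_n and c_n).
    return n ** -0.6, n ** -0.8, 1.0 / n

# Toy one-dimensional surrogate for the three coupled loops:
# x should track y + z, y should track z, and z should settle at 1.
x = y = z = 0.0
for n in range(1, 200_001):
    a_n, b_n, c_n = stepsizes(n)
    noise = rng.normal(scale=0.5, size=3)   # martingale-difference noise
    x += a_n * ((y + z - x) + noise[0])     # fastest loop: quasi-static in (y, z)
    y += b_n * ((z - y) + noise[1])         # intermediate loop: quasi-static in z
    z += c_n * ((1.0 - z) + noise[2])       # slowest loop

print(f"x = {x:.3f} (target 2), y = {y:.3f} (target 1), z = {z:.3f} (target 1)")
```

With these schedules the ratios b_n/a_n and c_n/b_n both vanish, which is the ordering of the three loops that the stepsize condition is meant to enforce.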
|
| 120 |
+
},
|
| 121 |
+
{
|
| 122 |
+
"section_id": "Appendix 3",
|
| 123 |
+
"parent_section_id": null,
|
| 124 |
+
"section_name": "Appendix C Convergence of the DR -learning Algorithm",
|
| 125 |
+
"text": "Before we start the proof of the DR -learning algorithm, we first introduce the following lemma.\nDenote\n. Given that , then we have .\nNote that for , .\nAlso we know that when ,\nThen we can conclude that . Moreover, as , we know , which concludes that .\n\u220e\nNote that when reward is bounded by . Thus in our case and then we denote .\nNow we are ready to prove the convergence of the DR -learning algorithm.\nFor theoretical analysis, we consider the clipping version of our DR -learning algorithm.\nWe define the filtration generated by the historical trajectory,\nIn the following analysis, we fix for a but ignore the dependence for notation simplicity.\nFollowing the roadmap in Section 3.4, we rewrite the algorithm as\nHere for theoretical analysis, we add a clipping operator and compared with the algorithm presented in the main text.\nWe first proceed by first identifying the terms in Equation 26 ###reference_### and 27 ###reference_### and studying the corresponding ODEs\nAs and is in fact irrelavant to the and , we analyze their equilibria seperately. For notation convenience, we denote .\nFor ODE 26 ###reference_### and each , it is easy to know there exists a unique global asymtotically stable equilibrium .\nSimilarly, For ODE 27 ###reference_### and each , there exists a unique global asymtotically stable equilibrium .\nSecond, and .\nNote that for any , , and .\nThus , which leads to .\nSince and for any , we have,\nwhere . Similarly, we can conclude that for some .\nNext we analyze the second loop.\nwhere\nThe global convergence point is .\nFinally we arrive to the outer loop, i.e.,\nBy using the dual form of Cressie-Read Divergence (Lemma 3.1 ###reference_theorem1###), we know that this is equivilant to\nfor ambiguity set using Cressie-Read of divergence.\nDenote and thus\nwe can rewrite the above ODE as\nFollowing , we consider its infity version, i.e., .\nThis is a contraction by Theorem 3.2 in Iyengar (2005 ###reference_b15###).\nBy the proof in Section 3.2 in Borkar & Meyn (2000 ###reference_b6###), we know the contraction can lead to the global unique equilibrium point in the ode.\nThus we finish verifying all the conditions in Section B.3 ###reference_###, which can lead to the desired result.\n\u220e"
|
| 126 |
+
}
|
| 127 |
+
],
|
| 128 |
+
"tables": {
|
| 129 |
+
"1": {
|
| 130 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A1.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T1.12\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.T1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"A1.T1.4.4.5\">Environment</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"A1.T1.1.1.1\">Maximum Training Step \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"A1.T1.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"A1.T1.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T1.4.4.4\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T1.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"A1.T1.8.8.5\">CartPole</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T1.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T1.8.8.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T1.12.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"A1.T1.12.12.5\">LunarLander</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A1.T1.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A1.T1.10.10.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A1.T1.11.11.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T1.12.12.4\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Different Hyperparamers between CartPole and LunarLander</figcaption>\n</figure>",
|
| 131 |
+
"capture": "Table 1: Different Hyperparamers between CartPole and LunarLander"
|
| 132 |
+
}
|
| 133 |
+
},
|
| 134 |
+
"image_paths": {
|
| 135 |
+
"1(a)": {
|
| 136 |
+
"figure_path": "2301.11721v2_figure_1(a).png",
|
| 137 |
+
"caption": "(a) Environment\nFigure 1: The Cliffwalking environment and the learned policies for different \u03c1\ud835\udf0c\\rhoitalic_\u03c1\u2019s.",
|
| 138 |
+
"url": "http://arxiv.org/html/2301.11721v2/x1.png"
|
| 139 |
+
},
|
| 140 |
+
"1(b)": {
|
| 141 |
+
"figure_path": "2301.11721v2_figure_1(b).png",
|
| 142 |
+
"caption": "(b) Nonrobust\nFigure 1: The Cliffwalking environment and the learned policies for different \u03c1\ud835\udf0c\\rhoitalic_\u03c1\u2019s.",
|
| 143 |
+
"url": "http://arxiv.org/html/2301.11721v2/x2.png"
|
| 144 |
+
},
|
| 145 |
+
"1(c)": {
|
| 146 |
+
"figure_path": "2301.11721v2_figure_1(c).png",
|
| 147 |
+
"caption": "(c) \u03c1=1.0\ud835\udf0c1.0\\rho=1.0italic_\u03c1 = 1.0\nFigure 1: The Cliffwalking environment and the learned policies for different \u03c1\ud835\udf0c\\rhoitalic_\u03c1\u2019s.",
|
| 148 |
+
"url": "http://arxiv.org/html/2301.11721v2/x3.png"
|
| 149 |
+
},
|
| 150 |
+
"1(d)": {
|
| 151 |
+
"figure_path": "2301.11721v2_figure_1(d).png",
|
| 152 |
+
"caption": "(d) \u03c1=1.5\ud835\udf0c1.5\\rho=1.5italic_\u03c1 = 1.5\nFigure 1: The Cliffwalking environment and the learned policies for different \u03c1\ud835\udf0c\\rhoitalic_\u03c1\u2019s.",
|
| 153 |
+
"url": "http://arxiv.org/html/2301.11721v2/x4.png"
|
| 154 |
+
},
|
| 155 |
+
"2(a)": {
|
| 156 |
+
"figure_path": "2301.11721v2_figure_2(a).png",
|
| 157 |
+
"caption": "(a) Return\nFigure 2: Averaged return and steps with 100 random seeds in the perturbed environments. \u03c1=0\ud835\udf0c0\\rho=0italic_\u03c1 = 0 corresponds to the non-robust Q\ud835\udc44Qitalic_Q-learning. R\ud835\udc45Ritalic_R denotes the R\ud835\udc45Ritalic_R-contamination ambiguity set.",
|
| 158 |
+
"url": "http://arxiv.org/html/2301.11721v2/x5.png"
|
| 159 |
+
},
|
| 160 |
+
"2(b)": {
|
| 161 |
+
"figure_path": "2301.11721v2_figure_2(b).png",
|
| 162 |
+
"caption": "(b) Episode length\nFigure 2: Averaged return and steps with 100 random seeds in the perturbed environments. \u03c1=0\ud835\udf0c0\\rho=0italic_\u03c1 = 0 corresponds to the non-robust Q\ud835\udc44Qitalic_Q-learning. R\ud835\udc45Ritalic_R denotes the R\ud835\udc45Ritalic_R-contamination ambiguity set.",
|
| 163 |
+
"url": "http://arxiv.org/html/2301.11721v2/x6.png"
|
| 164 |
+
},
|
| 165 |
+
"2(c)": {
|
| 166 |
+
"figure_path": "2301.11721v2_figure_2(c).png",
|
| 167 |
+
"caption": "(c) Value of various k\ud835\udc58kitalic_k and \u03c1\ud835\udf0c\\rhoitalic_\u03c1\nFigure 2: Averaged return and steps with 100 random seeds in the perturbed environments. \u03c1=0\ud835\udf0c0\\rho=0italic_\u03c1 = 0 corresponds to the non-robust Q\ud835\udc44Qitalic_Q-learning. R\ud835\udc45Ritalic_R denotes the R\ud835\udc45Ritalic_R-contamination ambiguity set.",
|
| 168 |
+
"url": "http://arxiv.org/html/2301.11721v2/x7.png"
|
| 169 |
+
},
|
| 170 |
+
"3": {
|
| 171 |
+
"figure_path": "2301.11721v2_figure_3.png",
|
| 172 |
+
"caption": "Figure 3: The training curves in the Cliffwalking environment. Each curve is averaged over 100 random seeds and shaded by their standard deviations. The dashed line is the optimal robust value with corresponding k\ud835\udc58kitalic_k and \u03c1\ud835\udf0c\\rhoitalic_\u03c1.",
|
| 173 |
+
"url": "http://arxiv.org/html/2301.11721v2/x8.png"
|
| 174 |
+
},
|
| 175 |
+
"4": {
|
| 176 |
+
"figure_path": "2301.11721v2_figure_4.png",
|
| 177 |
+
"caption": "Figure 4: Sample complexity comparisons in Cliffwalking environment with Liu\u2019s and Model-based algorithms. Each curve is averaged over 100 random seeds and shaded by their standard deviations.",
|
| 178 |
+
"url": "http://arxiv.org/html/2301.11721v2/x9.png"
|
| 179 |
+
},
|
| 180 |
+
"5": {
|
| 181 |
+
"figure_path": "2301.11721v2_figure_5.png",
|
| 182 |
+
"caption": "Figure 5: The return in the CartPole and LunarLander environment. Each curve is averaged over 100 random seeds and shaded by their standard deviations. AP: Action Perturbation; FMP: Force Mag Perturbation; EPP: Engines Power Perturbation.",
|
| 183 |
+
"url": "http://arxiv.org/html/2301.11721v2/x10.png"
|
| 184 |
+
},
|
| 185 |
+
"6": {
|
| 186 |
+
"figure_path": "2301.11721v2_figure_6.png",
|
| 187 |
+
"caption": "Figure 6: Averaged return in the American call option problem. \u03c1=0.0\ud835\udf0c0.0\\rho=0.0italic_\u03c1 = 0.0 is the non-robust Q\ud835\udc44Qitalic_Q-learning.",
|
| 188 |
+
"url": "http://arxiv.org/html/2301.11721v2/x11.png"
|
| 189 |
+
},
|
| 190 |
+
"7": {
|
| 191 |
+
"figure_path": "2301.11721v2_figure_7.png",
|
| 192 |
+
"caption": "Figure 7: Convergence curve of DR Q\ud835\udc44Qitalic_Q-learning algorithm to the true DR value under different \u03c1\ud835\udf0c\\rhoitalic_\u03c1\u2019s and k\ud835\udc58kitalic_k\u2019s. Each curve is averaged over 10 random seeds and shaded by their standard deviation. The dashed line is the optimal robust value with corresponding k\ud835\udc58kitalic_k and \u03c1\ud835\udf0c\\rhoitalic_\u03c1.",
|
| 193 |
+
"url": "http://arxiv.org/html/2301.11721v2/x12.png"
|
| 194 |
+
},
|
| 195 |
+
"8": {
|
| 196 |
+
"figure_path": "2301.11721v2_figure_8.png",
|
| 197 |
+
"caption": "Figure 8: Sample complexity comparisons in American option environment with other DRRL algorithms.\nThe dashed line is the optimal robust value with corresponding k\ud835\udc58kitalic_k and \u03c1\ud835\udf0c\\rhoitalic_\u03c1. The x\ud835\udc65xitalic_x-axis is in log10 scale. Each curve is averaged over 10 random seeds and shaded by their one standard deviation.\nThe dashed line is the optimal robust value with corresponding k\ud835\udc58kitalic_k and \u03c1\ud835\udf0c\\rhoitalic_\u03c1.",
|
| 198 |
+
"url": "http://arxiv.org/html/2301.11721v2/x13.png"
|
| 199 |
+
}
|
| 200 |
+
},
|
| 201 |
+
"validation": true,
|
| 202 |
+
"references": [
|
| 203 |
+
{
|
| 204 |
+
"1": {
|
| 205 |
+
"title": "Maximum a posteriori policy optimisation.",
|
| 206 |
+
"author": "Abdolmaleki, A., Springenberg, J. T., Tassa, Y., Munos, R., Heess, N., and\nRiedmiller, M.",
|
| 207 |
+
"venue": "arXiv preprint arXiv:1806.06920, 2018.",
|
| 208 |
+
"url": null
|
| 209 |
+
}
|
| 210 |
+
},
|
| 211 |
+
{
|
| 212 |
+
"2": {
|
| 213 |
+
"title": "Wasserstein Robust Reinforcement Learning, 2019.",
|
| 214 |
+
"author": "Abdullah, M. A., Ren, H., Ammar, H. B., Milenkovic, V., Luo, R., Zhang, M., and\nWang, J.",
|
| 215 |
+
"venue": "URL http://arxiv.org/abs/1907.13196.",
|
| 216 |
+
"url": null
|
| 217 |
+
}
|
| 218 |
+
},
|
| 219 |
+
{
|
| 220 |
+
"3": {
|
| 221 |
+
"title": "Robust reinforcement learning using least squares policy iteration\nwith provable performance guarantees.",
|
| 222 |
+
"author": "Badrinath, K. P. and Kalathil, D.",
|
| 223 |
+
"venue": "In International Conference on Machine Learning, pp. 511\u2013520. PMLR, 2021.",
|
| 224 |
+
"url": null
|
| 225 |
+
}
|
| 226 |
+
},
|
| 227 |
+
{
|
| 228 |
+
"4": {
|
| 229 |
+
"title": "Unbiased monte carlo for optimization and functions of expectations\nvia multi-level randomization.",
|
| 230 |
+
"author": "Blanchet, J. H. and Glynn, P. W.",
|
| 231 |
+
"venue": "In 2015 Winter Simulation Conference (WSC), pp. 3656\u20133667.\nIEEE, 2015.",
|
| 232 |
+
"url": null
|
| 233 |
+
}
|
| 234 |
+
},
|
| 235 |
+
{
|
| 236 |
+
"5": {
|
| 237 |
+
"title": "Stochastic approximation: a dynamical systems viewpoint,\nvolume 48.",
|
| 238 |
+
"author": "Borkar, V. S.",
|
| 239 |
+
"venue": "Springer, 2009.",
|
| 240 |
+
"url": null
|
| 241 |
+
}
|
| 242 |
+
},
|
| 243 |
+
{
|
| 244 |
+
"6": {
|
| 245 |
+
"title": "The ode method for convergence of stochastic approximation and\nreinforcement learning.",
|
| 246 |
+
"author": "Borkar, V. S. and Meyn, S. P.",
|
| 247 |
+
"venue": "SIAM Journal on Control and Optimization, 38(2):447\u2013469, 2000.",
|
| 248 |
+
"url": null
|
| 249 |
+
}
|
| 250 |
+
},
|
| 251 |
+
{
|
| 252 |
+
"7": {
|
| 253 |
+
"title": "Openai gym.",
|
| 254 |
+
"author": "Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang,\nJ., and Zaremba, W.",
|
| 255 |
+
"venue": "arXiv preprint arXiv:1606.01540, 2016.",
|
| 256 |
+
"url": null
|
| 257 |
+
}
|
| 258 |
+
},
|
| 259 |
+
{
|
| 260 |
+
"8": {
|
| 261 |
+
"title": "Option pricing: A simplified approach.",
|
| 262 |
+
"author": "Cox, J. C., Ross, S. A., and Rubinstein, M.",
|
| 263 |
+
"venue": "Journal of financial Economics, 7(3):229\u2013263, 1979.",
|
| 264 |
+
"url": null
|
| 265 |
+
}
|
| 266 |
+
},
|
| 267 |
+
{
|
| 268 |
+
"9": {
|
| 269 |
+
"title": "Multinomial goodness-of-fit tests.",
|
| 270 |
+
"author": "Cressie, N. and Read, T. R.",
|
| 271 |
+
"venue": "Journal of the Royal Statistical Society: Series B\n(Methodological), 46(3):440\u2013464, 1984.",
|
| 272 |
+
"url": null
|
| 273 |
+
}
|
| 274 |
+
},
|
| 275 |
+
{
|
| 276 |
+
"10": {
|
| 277 |
+
"title": "Model-free risk-sensitive reinforcement learning.",
|
| 278 |
+
"author": "Del\u00e9tang, G., Grau-Moya, J., Kunesch, M., Genewein, T., Brekelmans, R.,\nLegg, S., and Ortega, P. A.",
|
| 279 |
+
"venue": "arXiv preprint arXiv:2111.02907, 2021.",
|
| 280 |
+
"url": null
|
| 281 |
+
}
|
| 282 |
+
},
|
| 283 |
+
{
|
| 284 |
+
"11": {
|
| 285 |
+
"title": "Soft-robust actor-critic policy-gradient.",
|
| 286 |
+
"author": "Derman, E., Mankowitz, D. J., Mann, T. A., and Mannor, S.",
|
| 287 |
+
"venue": "arXiv preprint arXiv:1803.04848, 2018.",
|
| 288 |
+
"url": null
|
| 289 |
+
}
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"12": {
|
| 293 |
+
"title": "Learning models with uniform performance via distributionally robust\noptimization.",
|
| 294 |
+
"author": "Duchi, J. C. and Namkoong, H.",
|
| 295 |
+
"venue": "The Annals of Statistics, 49(3):1378\u20131406, 2021.",
|
| 296 |
+
"url": null
|
| 297 |
+
}
|
| 298 |
+
},
|
| 299 |
+
{
|
| 300 |
+
"13": {
|
| 301 |
+
"title": "Robust markov decision processes: Beyond rectangularity.",
|
| 302 |
+
"author": "Goyal, V. and Grand-Clement, J.",
|
| 303 |
+
"venue": "Mathematics of Operations Research, 2022.",
|
| 304 |
+
"url": null
|
| 305 |
+
}
|
| 306 |
+
},
|
| 307 |
+
{
|
| 308 |
+
"14": {
|
| 309 |
+
"title": "Partial policy iteration for l1-robust markov decision processes.",
|
| 310 |
+
"author": "Ho, C. P., Petrik, M., and Wiesemann, W.",
|
| 311 |
+
"venue": "J. Mach. Learn. Res., 22:275\u20131, 2021.",
|
| 312 |
+
"url": null
|
| 313 |
+
}
|
| 314 |
+
},
|
| 315 |
+
{
|
| 316 |
+
"15": {
|
| 317 |
+
"title": "Robust dynamic programming.",
|
| 318 |
+
"author": "Iyengar, G. N.",
|
| 319 |
+
"venue": "Mathematics of Operations Research, 30(2):257\u2013280, 2005.",
|
| 320 |
+
"url": null
|
| 321 |
+
}
|
| 322 |
+
},
|
| 323 |
+
{
|
| 324 |
+
"16": {
|
| 325 |
+
"title": "Auto-encoding variational bayes.",
|
| 326 |
+
"author": "Kingma, D. P. and Welling, M.",
|
| 327 |
+
"venue": "arXiv preprint arXiv:1312.6114, 2013.",
|
| 328 |
+
"url": null
|
| 329 |
+
}
|
| 330 |
+
},
|
| 331 |
+
{
|
| 332 |
+
"17": {
|
| 333 |
+
"title": "Reinforcement learning in robust markov decision processes.",
|
| 334 |
+
"author": "Lim, S. H., Xu, H., and Mannor, S.",
|
| 335 |
+
"venue": "Advances in Neural Information Processing Systems, 26, 2013.",
|
| 336 |
+
"url": null
|
| 337 |
+
}
|
| 338 |
+
},
|
| 339 |
+
{
|
| 340 |
+
"18": {
|
| 341 |
+
"title": "Distributionally robust -learning.",
|
| 342 |
+
"author": "Liu, Z., Bai, Q., Blanchet, J., Dong, P., Xu, W., Zhou, Z., and Zhou, Z.",
|
| 343 |
+
"venue": "In International Conference on Machine Learning, pp. 13623\u201313643. PMLR, 2022.",
|
| 344 |
+
"url": null
|
| 345 |
+
}
|
| 346 |
+
},
|
| 347 |
+
{
|
| 348 |
+
"19": {
|
| 349 |
+
"title": "Distributionally robust offline reinforcement learning with linear\nfunction approximation.",
|
| 350 |
+
"author": "Ma, X., Liang, Z., Xia, L., Zhang, J., Blanchet, J., Liu, M., Zhao, Q., and\nZhou, Z.",
|
| 351 |
+
"venue": "arXiv preprint arXiv:2209.06620, 2022.",
|
| 352 |
+
"url": null
|
| 353 |
+
}
|
| 354 |
+
},
|
| 355 |
+
{
|
| 356 |
+
"20": {
|
| 357 |
+
"title": "Bias and variance in value function estimation.",
|
| 358 |
+
"author": "Mannor, S., Simester, D., Sun, P., and Tsitsiklis, J. N.",
|
| 359 |
+
"venue": "In Proceedings of the twenty-first international conference on\nMachine learning, pp. 72, 2004.",
|
| 360 |
+
"url": null
|
| 361 |
+
}
|
| 362 |
+
},
|
| 363 |
+
{
|
| 364 |
+
"21": {
|
| 365 |
+
"title": "Human-level control through deep reinforcement learning.",
|
| 366 |
+
"author": "Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare,\nM. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al.",
|
| 367 |
+
"venue": "nature, 518(7540):529\u2013533, 2015.",
|
| 368 |
+
"url": null
|
| 369 |
+
}
|
| 370 |
+
},
|
| 371 |
+
{
|
| 372 |
+
"22": {
|
| 373 |
+
"title": "Robust q-learning algorithm for markov decision processes under\nwasserstein uncertainty.",
|
| 374 |
+
"author": "Neufeld, A. and Sester, J.",
|
| 375 |
+
"venue": "ArXiv, abs/2210.00898, 2022.",
|
| 376 |
+
"url": null
|
| 377 |
+
}
|
| 378 |
+
},
|
| 379 |
+
{
|
| 380 |
+
"23": {
|
| 381 |
+
"title": "Robust control of markov decision processes with uncertain transition\nmatrices.",
|
| 382 |
+
"author": "Nilim, A. and El Ghaoui, L.",
|
| 383 |
+
"venue": "Operations Research, 53(5):780\u2013798, 2005.",
|
| 384 |
+
"url": null
|
| 385 |
+
}
|
| 386 |
+
},
|
| 387 |
+
{
|
| 388 |
+
"24": {
|
| 389 |
+
"title": "Sample complexity of robust reinforcement learning with a generative\nmodel.",
|
| 390 |
+
"author": "Panaganti, K. and Kalathil, D.",
|
| 391 |
+
"venue": "In International Conference on Artificial Intelligence and\nStatistics, pp. 9582\u20139602. PMLR, 2022.",
|
| 392 |
+
"url": null
|
| 393 |
+
}
|
| 394 |
+
},
|
| 395 |
+
{
|
| 396 |
+
"25": {
|
| 397 |
+
"title": "Robust reinforcement learning using offline data.",
|
| 398 |
+
"author": "Panaganti, K., Xu, Z., Kalathil, D., and Ghavamzadeh, M.",
|
| 399 |
+
"venue": "Advances in neural information processing systems,\n35:32211\u201332224, 2022.",
|
| 400 |
+
"url": null
|
| 401 |
+
}
|
| 402 |
+
},
|
| 403 |
+
{
|
| 404 |
+
"26": {
|
| 405 |
+
"title": "Reinforcement learning under model mismatch.",
|
| 406 |
+
"author": "Roy, A., Xu, H., and Pokutta, S.",
|
| 407 |
+
"venue": "Advances in neural information processing systems, 30, 2017.",
|
| 408 |
+
"url": null
|
| 409 |
+
}
|
| 410 |
+
},
|
| 411 |
+
{
|
| 412 |
+
"27": {
|
| 413 |
+
"title": "Proximal policy optimization algorithms.",
|
| 414 |
+
"author": "Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O.",
|
| 415 |
+
"venue": "arXiv preprint arXiv:1707.06347, 2017.",
|
| 416 |
+
"url": null
|
| 417 |
+
}
|
| 418 |
+
},
|
| 419 |
+
{
|
| 420 |
+
"28": {
|
| 421 |
+
"title": "Distributionally Robust Model-Based Offline Reinforcement\nLearning with Near-Optimal Sample Complexity.",
|
| 422 |
+
"author": "Shi, L. and Chi, Y.",
|
| 423 |
+
"venue": "URL http://arxiv.org/abs/2208.05767.",
|
| 424 |
+
"url": null
|
| 425 |
+
}
|
| 426 |
+
},
|
| 427 |
+
{
|
| 428 |
+
"29": {
|
| 429 |
+
"title": "Mastering the game of go with deep neural networks and tree search.",
|
| 430 |
+
"author": "Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche,\nG., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M.,\net al.",
|
| 431 |
+
"venue": "nature, 529(7587):484\u2013489, 2016.",
|
| 432 |
+
"url": null
|
| 433 |
+
}
|
| 434 |
+
},
|
| 435 |
+
{
|
| 436 |
+
"30": {
|
| 437 |
+
"title": "Scaling up robust mdps using function approximation.",
|
| 438 |
+
"author": "Tamar, A., Mannor, S., and Xu, H.",
|
| 439 |
+
"venue": "In International conference on machine learning, pp. 181\u2013189. PMLR, 2014.",
|
| 440 |
+
"url": null
|
| 441 |
+
}
|
| 442 |
+
},
|
| 443 |
+
{
|
| 444 |
+
"31": {
|
| 445 |
+
"title": "Asynchronous stochastic approximation and q-learning.",
|
| 446 |
+
"author": "Tsitsiklis, J. N.",
|
| 447 |
+
"venue": "Machine learning, 16:185\u2013202, 1994.",
|
| 448 |
+
"url": null
|
| 449 |
+
}
|
| 450 |
+
},
|
| 451 |
+
{
|
| 452 |
+
"32": {
|
| 453 |
+
"title": "Grandmaster level in StarCraft II using multi-agent reinforcement\nlearning.",
|
| 454 |
+
"author": "Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung,\nJ., Choi, D. H., Powell, R., Ewalds, T., Georgiev, P., Oh, J., Horgan, D.,\nKroiss, M., Danihelka, I., Huang, A., Sifre, L., Cai, T., Agapiou, J. P.,\nJaderberg, M., Vezhnevets, A. S., Leblond, R., Pohlen, T., Dalibard, V.,\nBudden, D., Sulsky, Y., Molloy, J., Paine, T. L., Gulcehre, C., Wang, Z.,\nPfaff, T., Wu, Y., Ring, R., Yogatama, D., W\u00fcnsch, D., McKinney, K., Smith,\nO., Schaul, T., Lillicrap, T., Kavukcuoglu, K., Hassabis, D., Apps, C., and\nSilver, D.",
|
| 455 |
+
"venue": "575(7782):350\u2013354, 2019.",
|
| 456 |
+
"url": null
|
| 457 |
+
}
|
| 458 |
+
},
|
| 459 |
+
{
|
| 460 |
+
"33": {
|
| 461 |
+
"title": "Online robust reinforcement learning with model uncertainty.",
|
| 462 |
+
"author": "Wang, Y. and Zou, S.",
|
| 463 |
+
"venue": "Advances in Neural Information Processing Systems,\n34:7193\u20137206, 2021.",
|
| 464 |
+
"url": null
|
| 465 |
+
}
|
| 466 |
+
},
|
| 467 |
+
{
|
| 468 |
+
"34": {
|
| 469 |
+
"title": "Robust markov decision processes.",
|
| 470 |
+
"author": "Wiesemann, W., Kuhn, D., and Rustem, B.",
|
| 471 |
+
"venue": "Mathematics of Operations Research, 38(1):153\u2013183, 2013.",
|
| 472 |
+
"url": null
|
| 473 |
+
}
|
| 474 |
+
},
|
| 475 |
+
{
|
| 476 |
+
"35": {
|
| 477 |
+
"title": "Distributionally robust markov decision processes.",
|
| 478 |
+
"author": "Xu, H. and Mannor, S.",
|
| 479 |
+
"venue": "Advances in Neural Information Processing Systems, 23, 2010.",
|
| 480 |
+
"url": null
|
| 481 |
+
}
|
| 482 |
+
},
|
| 483 |
+
{
|
| 484 |
+
"36": {
|
| 485 |
+
"title": "Wasserstein distributionally robust stochastic control: A data-driven\napproach.",
|
| 486 |
+
"author": "Yang, I.",
|
| 487 |
+
"venue": "IEEE Transactions on Automatic Control, 66:3863\u20133870, 2018.",
|
| 488 |
+
"url": null
|
| 489 |
+
}
|
| 490 |
+
},
|
| 491 |
+
{
|
| 492 |
+
"37": {
|
| 493 |
+
"title": "Toward theoretical understandings of robust markov decision\nprocesses: Sample complexity and asymptotics.",
|
| 494 |
+
"author": "Yang, W., Zhang, L., and Zhang, Z.",
|
| 495 |
+
"venue": "The Annals of Statistics, 50(6):3223\u20133248, 2022.",
|
| 496 |
+
"url": null
|
| 497 |
+
}
|
| 498 |
+
},
|
| 499 |
+
{
|
| 500 |
+
"38": {
|
| 501 |
+
"title": "Finite-sample regret bound for distributionally robust offline\ntabular reinforcement learning.",
|
| 502 |
+
"author": "Zhou, Z., Zhou, Z., Bai, Q., Qiu, L., Blanchet, J., and Glynn, P.",
|
| 503 |
+
"venue": "In International Conference on Artificial Intelligence and\nStatistics, pp. 3331\u20133339. PMLR, 2021.",
|
| 504 |
+
"url": null
|
| 505 |
+
}
|
| 506 |
+
}
|
| 507 |
+
],
|
| 508 |
+
"url": "http://arxiv.org/html/2301.11721v2"
|
| 509 |
+
}
|
20240921/2303.02770v2.json
ADDED
|
@@ -0,0 +1,48 @@
| 1 |
+
{
|
| 2 |
+
"title": "Universal distribution of the empirical coverage in split conformal prediction",
|
| 3 |
+
"abstract": "When split conformal prediction operates in batch mode with exchangeable data, we determine the exact distribution of the empirical coverage of prediction sets produced for a finite batch of future observables, as well as the exact distribution of its almost sure limit when the batch size goes to infinity. Both distributions are universal, being determined solely by the nominal miscoverage level and the calibration sample size, thereby establishing a criterion for choosing the minimum required calibration sample size in applications.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Conformal prediction is a framework developed to quantify the confidence in the forecasts made by general predictive models which is quickly moving the field of machine learning from a stage dominated by point predictions to a new period in which forecasts are summarized by prediction sets with statistical guarantees. Several features make conformal prediction appealing for use with contemporary machine learning algorithms: it is universal (distribution-free), able to handle high-dimensional data, model agnostic, and its properties hold for finite samples [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###]. Notably, the split conformal prediction algorithm [4 ###reference_b4###, 5 ###reference_b5###] is a widely adopted conformalization technique which strikes a balance between predictive properties and computational cost. Our goal in this paper is to identify the exact distribution of the empirical coverage of prediction sets produced by the split conformal prediction procedure for a finite batch of future observables, as well as to determine the exact distribution of its almost sure limit when the batch size tends to infinity. Both distributions are universal in the sense that they are determined solely by the nominal miscoverage level and the calibration sample size. The distribution of the empirical coverage was investigated for the first time in [6 ###reference_b6###] and [7 ###reference_b7###], with further discussion in [8 ###reference_b8###]. Our contribution consists in a formulation that emphasizes the role of the data exchangeability assumption and the combinatorial nature of the aforementioned properties of the empirical coverage, which are derived using standard exchangeability tools. Although this investigation pertains to the foundations of conformal prediction, the results are eminently practical and lead to a criterion summarized in Table 1 for the choice of the calibration sample size in applications."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Split conformal prediction",
|
| 15 |
+
"text": "Let denote the underlying probability space from which we induce the distributions of all random objects considered in the paper.\nA sequence of random objects is exchangeable if, for every and every permutation , the random tuples and have the same distribution.\nWe are in a supervised learning setting [9 ###reference_b9###] in which for each sample unit we have a -dimensional vector of predictors and a response variable . Specifically, in regression tasks with univariate response we take and in classification problems is a set of class labels. We have a data sequence of random pairs laid out as\nwhich is modeled by us as being exchangeable. At the beginning of the sequence we have the training sample , of size , followed by the calibration sample , of size , and the sequence of future observables . In applications, the available data is randomly split into the training and calibration samples, hence the name split conformal prediction, also known as the inductive case of conformal prediction. The data exchangeability assumption allows us to conveniently place the training sample at the beginning of the sequence. Let be the smallest sub--field of with respect to which the training sample is measurable.\nA conformity function is a mapping such that is -measurable for every and every . The sequence of conformity scores associated with a conformity function is defined by . We say that a conformity function is regular with respect to a specific data sequence if there are no ties among the corresponding conformity scores almost surely.\nNote that the regularity of a specific conformity function is contextual, being inherently dependent on the distribution of the underlying data sequence. Technically, we can always avoid ties among the sequence of conformity scores almost surely by introducing a properly constructed ancillary tie-breaking sequence.\nIn regression problems, let be a regression function estimator. A standard choice [4 ###reference_b4###, 5 ###reference_b5###] is to use the conformity function . Conformalized quantile regression [10 ###reference_b10###] is a widely adopted alternative. For , let be the conditional th quantile function and suppose that we have an estimator of . Choose and define the conformity function . The choice of and is discussed in [10 ###reference_b10###]. In classification problems, let be a classification algorithm outputting probabilities for each one of the class labels . In this classification case, we can take our conformity function to be .\nConformity functions are agnostic to the choice of the specific models or algorithms used to construct , , and in Example 1 ###reference_###. The intuition is that the associated conformity scores measure the ability of the model to make accurate predictions on the calibration sample, whose information is not used in the model\u00b4s training process, and the assumed data sequence exchangeability transfers this assessment of the model\u2019s predictive capacity from the calibration sample to the sequence of future observables. The following result is proved in the Appendix.\nUnder the data exchangeability assumption, the sequence of conformity scores is exchangeable.\nFor a real number , let and denote the ceiling and the floor of , respectively.\nFor a regular conformity function , denote the associated ordered calibration sample conformity scores by . Let be a specified nominal miscoverage level satisfying , in which case we say that the pair is feasible. 
Define the random conformal prediction set by\nfor a suitable -field of subsets of denoted by . We use the notation .\nLet , for an outcome . It follows from Definition 3 ###reference_3### and the definitions in Example 1 ###reference_### that for a future vector of predictors the observed conformal prediction sets have the forms: , for the standard conformity function, , for conformalized quantile regression, and , for classification.\nThe first major consequence of Lemma 1 ###reference_1### is the classical marginal validity property [4 ###reference_b4###, 5 ###reference_b5###] of the conformal prediction sets introduced in Definition 3 ###reference_3###. This property can be described briefly as follows. For a regular conformity function , using the notations and conditions in Definition 3 ###reference_3###, the distributional symmetry expressed in Lemma 1 ###reference_1### and the fact that we have no ties almost surely among the sequence of conformity scores, ensure that, for every , the conformity score for a future random pair is uniformly ranked among the ordered calibration sample conformity scores: , for . By choosing , noting that , for every , and considering that if and only if (as per Definitions 2 ###reference_2### and 3 ###reference_3###), we have the marginal validity property:"
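Definition 3, combined with the standard conformity score of Example 1, takes only a few lines of code. The sketch below is a generic Python illustration rather than the authors' implementation: the synthetic data, the use of scikit-learn's RandomForestRegressor as the regression estimator, and all names are our own. It splits the data, computes the calibration scores |y - mu(x)|, takes the ceil((1 - alpha)(n + 1))-th smallest score as the interval radius, and reports the empirical coverage on a batch of future observables.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_train, n_cal, n_test, alpha = 2000, 1000, 5000, 0.1

# Synthetic regression data standing in for an exchangeable data sequence.
X = rng.uniform(-3, 3, size=(n_train + n_cal + n_test, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=len(X))
X_tr, y_tr = X[:n_train], y[:n_train]
X_cal, y_cal = X[n_train:n_train + n_cal], y[n_train:n_train + n_cal]
X_te, y_te = X[n_train + n_cal:], y[n_train + n_cal:]

# Train mu on the training sample only; calibrate on the held-out calibration sample.
mu = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
scores = np.abs(y_cal - mu.predict(X_cal))        # conformity scores |y_i - mu(x_i)|
rank = int(np.ceil((1 - alpha) * (n_cal + 1)))    # order statistic used in Definition 3
q = np.sort(scores)[rank - 1]

# Prediction sets [mu(x) - q, mu(x) + q] for a batch of future observables.
covered = np.abs(y_te - mu.predict(X_te)) <= q
print(f"empirical coverage over {n_test} future points: {covered.mean():.3f} (nominal {1 - alpha})")
```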
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Empirical coverage distribution",
|
| 21 |
+
"text": "Using the notations and conditions in Definition 3 ###reference_3###, let be a sequence of coverage indicators defined by , if , and , otherwise. The empirical coverage of a batch of future observables is the random variable .\nIn general, the coverage indicators are dependent random variables, since for all future observables the corresponding conformal prediction sets in Definition 3 ###reference_3### are defined in terms of the same calibration sample conformity score . This would still be the case even if we had started with the stronger assumption of an independent and identically distributed data sequence. The interesting fact is that Definition 4 ###reference_4### inherits through Lemma 1 ###reference_1### the distributional symmetry implied by the data exchangeability assumption, giving us the following result, proved in the Appendix.\nUnder the data exchangeability assumption, for a regular conformity function, the sequence of coverage indicators is exchangeable and is distributed as a random variable, to the effect that the distribution of the empirical coverage is given by\nfor , and every future batch size .\nBy symmetry, Definition 4 ###reference_4### and the exchangeability of the sequence of coverage indicators established in Theorem 1 ###reference_1### yield that . Consequently, we can interpret the marginal validity property (1 ###reference_###) as partial information about the distribution of the empirical coverage. Specifically, as an inequality constraint on the expectation of . The following result, proved in the Appendix as a direct consequence of Theorem 1 ###reference_1### and de Finetti\u2019s representation theorem, identifies the distribution of the almost sure limit of the empirical coverage when the future batch size tends to infinity.\nUnder the data exchangeability assumption, for a regular conformity function, the empirical coverage converges almost surely, when the future batch size tends to infinity, to a random variable with distribution ."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "Calibration sample size and concluding remarks",
|
| 27 |
+
"text": "In applications, we typically use a trained model to construct a large number of prediction intervals and Theorem 2 ###reference_2### gives us a criterion to determine the minimum calibration sample size required to control the empirical coverage of an infinite batch of future observables. Given a nominal miscoverage level , we specify an and a tolerance probability , looking for the smallest calibration sample size such that the empirical coverage of an infinite batch of future observables is within of , with a probability of at least . Formally, recalling from Definition 3 ###reference_3### that a feasible pair satisfies , which is equivalent to saying that the integer , the minimum required calibration size is given by\nTable 1 ###reference_### gives the values of the minimum required calibration sample size for different values of , , and . That an understanding of the distribution of the empirical coverage is necessary to determine the minimum required calibration sample sizes in applications of split conformal prediction was first discussed in [8 ###reference_b8###]. The calibration sample sizes presented in [8 ###reference_b8###] are slightly larger than the corresponding values in Table 1 ###reference_###. In the repository [11 ###reference_b11###] we have the R [12 ###reference_b12###] code used to determine the calibration sample sizes in Table 1 ###reference_### and a comparison with the corresponding values given in [8 ###reference_b8###]. Repository [11 ###reference_b11###] also contains a simulation illustrating the results in Theorems 1 ###reference_1### and 2 ###reference_2###."
|
| 28 |
+
}
|
| 29 |
+
],
|
| 30 |
+
"appendix": [
|
| 31 |
+
{
|
| 32 |
+
"section_id": "Appendix x1",
|
| 33 |
+
"parent_section_id": null,
|
| 34 |
+
"section_name": "Appendix. Proofs",
|
| 35 |
+
"text": "Recall that and . Since the conformity function in Definition 2 ###reference_2### is such that is -measurable for every and every , Doob-Dynkin\u2019s lemma (see [13 ###reference_b13###], Theorem A.42) implies that there is a measurable function\nsuch that , for every , and each . Hence, for Borel sets , we have\nFor any permutation ,\ndefine\nIf we consider only permutations such that , for , then and the data exchangeability assumption yields\nSince this restriction on still allows an arbitrary permutation of the conformity scores , and the argument holds for every , the desired exchangeability of the sequence of conformity scores follows.\n\u220e\nLet and , recalling from Definition 3 ###reference_3### that the pair is assumed to be feasible, so that . We will prove by induction on the batch size that\nin which , with , for . Due to the assumed regularity of the underlying conformity function , there are no ties among the sequence of conformity scores almost surely, and the distributional symmetry established in Lemma 1 ###reference_1### implies that is uniformly ranked among the calibration sample conformity scores . By Definitions 3 ###reference_3### and 4 ###reference_4###, for every , the coverage indicator if and only if , so that . Hence, and property holds for . By Lemma 1 ###reference_1### and the regularity of , the conformity score is ranked uniformly among the conformity scores . Moreover, the event , with , means that exactly of the conformity scores are less than or equal to . Hence, given that , with , we have that if and only if is ranked among the conformity scores , to the effect that\nNow, for the inductive step, suppose that property holds for some batch size . The product rule gives\nSince , in general we have that\nin which , with , for . Therefore, property holds for a batch with size , completing the inductive step and implying that property holds for every batch size . Inspection of the right hand side of reveals that the random vector ) is exchangeable, and since this holds for every batch size , we get as our first conclusion that the sequence of coverage indicators is exchangeable. Finally, the event is the union of mutually exclusive and, by exchangeability, equiprobable events of the form , in which , with . Therefore, property and Definition 4 ###reference_4### yield the desired result:\n\u220e\nBy Theorem 1 ###reference_1###, the sequence of coverage indicators is exchangeable, and de Finetti\u2019s representation theorem [13 ###reference_b13###] states that there is a random variable, say, , with distribution , such that, given that , the are conditionally independent and identically distributed with distribution , so that we have the integral representation\nfor . For , the event is the union of mutually exclusive and, by exchangeability, equiprobable events of the form , in which , with . Therefore, it follows from the integral representation above and Definition 4 ###reference_4### that\nLet the distribution of be dominated by Lebesgue measure with Radon-Nikodym derivative\nup to almost everywhere equivalence, in which and . This is a version of the density of a random variable with distribution. 
Using and the Leibniz rule for Radon-Nikodym derivatives (see [13 ###reference_b13###], Theorem A.79), we have that\nSince de Finetti\u2019s representation theorem states that the distribution of is unique and that converges almost surely to , when the batch size tends to infinity, the result follows by inspection of the distribution of the empirical coverage in Theorem 1 ###reference_1###.\n\u220e"
|
| 36 |
+
}
|
| 37 |
+
],
|
| 38 |
+
"tables": {
|
| 39 |
+
"1": {
|
| 40 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.5\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r\" id=\"S4.T1.4.4.5\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T1.1.1.1\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T1.2.2.2\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T1.3.3.3\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T1.4.4.4\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.5.5\">\n<th class=\"ltx_td ltx_nopad ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.5.5.1\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><svg height=\"13.39\" overflow=\"visible\" version=\"1.1\" width=\"47.76\"><g transform=\"translate(0,13.39) scale(1,-1)\"><path d=\"M 0,13.39 47.76,0\" stroke=\"#000000\" stroke-width=\"0.4\"></path><g class=\"ltx_svg_fog\" transform=\"translate(0,0)\"><g transform=\"translate(0,8.03) scale(1, -1)\"><foreignobject height=\"8.03\" overflow=\"visible\" width=\"23.88\">\n<span class=\"ltx_inline-block\" id=\"S4.T1.5.5.1.pic1.1.1\">\n<span class=\"ltx_inline-block ltx_align_left\" id=\"S4.T1.5.5.1.pic1.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T1.5.5.1.pic1.1.1.1.1\"></span>\n</span>\n</span></foreignobject></g></g><g class=\"ltx_svg_fog\" transform=\"translate(42.32,8.03)\"><g transform=\"translate(0,5.36) scale(1, -1)\"><foreignobject height=\"5.36\" overflow=\"visible\" width=\"5.44\">\n<span class=\"ltx_inline-block\" id=\"S4.T1.5.5.1.pic1.2.1\">\n<span class=\"ltx_inline-block ltx_align_right\" id=\"S4.T1.5.5.1.pic1.2.1.1\">\n<span class=\"ltx_p\" id=\"S4.T1.5.5.1.pic1.2.1.1.1\"></span>\n</span>\n</span></foreignobject></g></g></g></svg></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.5.2\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.2.1\" style=\"font-size:90%;\">90%</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.5.3\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.3.1\" style=\"font-size:90%;\">95%</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.5.5.4\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.4.1\" style=\"font-size:90%;\">99%</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.5.5\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.5.1\" style=\"font-size:90%;\">90%</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.5.6\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.6.1\" style=\"font-size:90%;\">95%</span></th>\n<th class=\"ltx_td ltx_align_center 
ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.5.5.7\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.7.1\" style=\"font-size:90%;\">99%</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.5.8\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.8.1\" style=\"font-size:90%;\">90%</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.5.9\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.9.1\" style=\"font-size:90%;\">95%</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.5.5.10\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.10.1\" style=\"font-size:90%;\">99%</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.5.11\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.11.1\" style=\"font-size:90%;\">90%</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.5.12\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.12.1\" style=\"font-size:90%;\">95%</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.5.5.13\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.13.1\" style=\"font-size:90%;\">99%</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.5.6.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.5.6.1.1\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.1.1\" style=\"font-size:90%;\">80%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.5.6.1.2\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.2.1\" style=\"font-size:90%;\">40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.5.6.1.3\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.3.1\" style=\"font-size:90%;\">57</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.5.6.1.4\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.4.1\" style=\"font-size:90%;\">98</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.5.6.1.5\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.5.1\" style=\"font-size:90%;\">170</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.5.6.1.6\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.6.1\" style=\"font-size:90%;\">241</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.5.6.1.7\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.7.1\" style=\"font-size:90%;\">418</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.5.6.1.8\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.8.1\" style=\"font-size:90%;\">4,326</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.5.6.1.9\" 
style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.9.1\" style=\"font-size:90%;\">6,142</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.5.6.1.10\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.10.1\" style=\"font-size:90%;\">10,611</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.5.6.1.11\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.11.1\" style=\"font-size:90%;\">17,314</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.5.6.1.12\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.12.1\" style=\"font-size:90%;\">24,581</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.5.6.1.13\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.13.1\" style=\"font-size:90%;\">42,457</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.5.7.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T1.5.7.2.1\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.1.1\" style=\"font-size:90%;\">85%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.7.2.2\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.2.1\" style=\"font-size:90%;\">30</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.7.2.3\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.3.1\" style=\"font-size:90%;\">42</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.5.7.2.4\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.4.1\" style=\"font-size:90%;\">77</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.7.2.5\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.5.1\" style=\"font-size:90%;\">134</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.7.2.6\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.6.1\" style=\"font-size:90%;\">189</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.5.7.2.7\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.7.1\" style=\"font-size:90%;\">330</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.7.2.8\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.8.1\" style=\"font-size:90%;\">3,446</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.7.2.9\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.9.1\" style=\"font-size:90%;\">4,893</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.5.7.2.10\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.10.1\" style=\"font-size:90%;\">8,451</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.7.2.11\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.11.1\" style=\"font-size:90%;\">13,794</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.7.2.12\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.12.1\" 
style=\"font-size:90%;\">19,587</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.5.7.2.13\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.13.1\" style=\"font-size:90%;\">33,830</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.5.8.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T1.5.8.3.1\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.8.3.1.1\" style=\"font-size:90%;\">90%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.8.3.2\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.8.3.2.1\" style=\"font-size:90%;\">11</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.8.3.3\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.8.3.3.1\" style=\"font-size:90%;\">14</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.5.8.3.4\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.8.3.4.1\" style=\"font-size:90%;\">47</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.8.3.5\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.8.3.5.1\" style=\"font-size:90%;\">90</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.8.3.6\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.8.3.6.1\" style=\"font-size:90%;\">128</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.5.8.3.7\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.8.3.7.1\" style=\"font-size:90%;\">227</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.8.3.8\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.8.3.8.1\" style=\"font-size:90%;\">2,429</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.8.3.9\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.8.3.9.1\" style=\"font-size:90%;\">3,448</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.5.8.3.10\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.8.3.10.1\" style=\"font-size:90%;\">5,958</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.8.3.11\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.8.3.11.1\" style=\"font-size:90%;\">9,733</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.8.3.12\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.8.3.12.1\" style=\"font-size:90%;\">13,821</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.5.8.3.13\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.8.3.13.1\" style=\"font-size:90%;\">23,875</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.5.9.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r\" id=\"S4.T1.5.9.4.1\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.9.4.1.1\" style=\"font-size:90%;\">95%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.5.9.4.2\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.9.4.2.1\" style=\"font-size:90%;\">19</span></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_b\" id=\"S4.T1.5.9.4.3\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.9.4.3.1\" style=\"font-size:90%;\">19</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.5.9.4.4\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.9.4.4.1\" style=\"font-size:90%;\">29</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.5.9.4.5\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.9.4.5.1\" style=\"font-size:90%;\">22</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.5.9.4.6\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.9.4.6.1\" style=\"font-size:90%;\">29</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.5.9.4.7\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.9.4.7.1\" style=\"font-size:90%;\">97</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.5.9.4.8\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.9.4.8.1\" style=\"font-size:90%;\">1,270</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.5.9.4.9\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.9.4.9.1\" style=\"font-size:90%;\">1,806</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.5.9.4.10\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.9.4.10.1\" style=\"font-size:90%;\">3,132</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.5.9.4.11\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.9.4.11.1\" style=\"font-size:90%;\">5,125</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.5.9.4.12\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.9.4.12.1\" style=\"font-size:90%;\">7,278</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.5.9.4.13\" style=\"padding-top:1.15pt;padding-bottom:1.15pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.9.4.13.1\" style=\"font-size:90%;\">12,578</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Universal coverage tolerance table for split conformal prediction. For a nominal miscoverage level , an , and a tolerance probability , the table entries are the minimum required calibration sample sizes such that the empirical coverage of an infinite batch of future observables is within of , with a probability of at least , according to Theorem <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2303.02770v2#Thmthm2\" title=\"Theorem 2. \u2023 3 Empirical coverage distribution \u2023 Universal distribution of the empirical coverage in split conformal prediction\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>.</figcaption>\n</figure>",
|
| 41 |
+
"capture": "Table 1: Universal coverage tolerance table for split conformal prediction. For a nominal miscoverage level , an , and a tolerance probability , the table entries are the minimum required calibration sample sizes such that the empirical coverage of an infinite batch of future observables is within of , with a probability of at least , according to Theorem 2."
|
| 42 |
+
}
|
| 43 |
+
},
|
| 44 |
+
"image_paths": {},
|
| 45 |
+
"validation": true,
|
| 46 |
+
"references": [],
|
| 47 |
+
"url": "http://arxiv.org/html/2303.02770v2"
|
| 48 |
+
}
|
20240921/2304.10392v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240921/2305.14254v2.json
ADDED
|
@@ -0,0 +1,281 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "A Shape-Newton Method for Free-boundary Problems Subject to The Bernoulli Boundary Condition",
|
| 3 |
+
"abstract": "We develop a shape-Newton method for solving generic free-boundary problems where one of the free-boundary conditions is governed by the nonlinear Bernoulli equation. The method is a Newton-like scheme that employs shape derivatives of the governing equations. In particular, we derive the shape derivative of the Bernoulli equation, which turns out to depend on the curvature in a nontrivial manner. The resulting shape-Newton method allows one to update the position of the free boundary by solving a special linear boundary-value problem at each iteration. We prove solvability of the linearised problem under certain conditions of the data. We verify the effectiveness of the shape-Newton approach applied to free-surface flow over a submerged triangular obstacle using a finite element method on a deforming mesh. We observe superlinear convergence behaviour for our shape-Newton method as opposed to the unfavourable linear rate of traditional methods.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Free boundary problems have many applications in fluid mechanics, such as open-channel flow, fluid/solid interaction and hydrodynamics. Solving such problems is difficult, because the geometry of the domain needs to be determined together with other variables in this problem. A simplified but important model problem is the Bernoulli free-boundary problem, which considers a (linear) Dirichlet boundary condition, as well as a Neumann boundary condition on the free boundary [4 ###reference_b4###, 23 ###reference_b23###]. This problem is not to be confused with the Bernoulli equation, which is the pressure boundary condition in irrotational fluid mechanics, and which we will study in this paper. The nonlinearity of the Bernoulli equation poses an additional challenge to numerical algorithms.\nThere are several computational approaches to solving free-boundary problems. The first is to iteratively solve the boundary value problem with a single free-boundary condition for the field variables on a fixed approximated domain, and then update the free surface derived from the remaining free boundary condition (which was not included in the boundary value problem). These fixed-point type methods are called trial methods, which converge linearly and cannot always find a solution. Details can be found, for example, in [23 ###reference_b23###, 3 ###reference_b3###, 19 ###reference_b19###].\nThe second approach is to formulate a shape optimization problem to improve the convergence rate. This method aims to construct a boundary-value problem as the state problem with one free-boundary condition and formulate a cost function with the remaining free-boundary condition. This approach may require gradient information. The formulation and application of shape optimization to free boundary problems can be found in, e.g. [9 ###reference_b9###, 15 ###reference_b15###, 16 ###reference_b16###, 27 ###reference_b27###, 28 ###reference_b28###, 30 ###reference_b30###].\nThe third approach requires linearising the whole system and applying a Newton-type method. The use of shape calculus and a Newton-type method is called the shape-Newton method. One linearisation method, called domain-map linearisation, requires to transform the free-boundary problem to an equivalent boundary value problem on a fixed domain and then linearise the transformed problem with respect to the domain map [20 ###reference_b20###, 31 ###reference_b31###]. An alternative way to linearise the free-boundary problem is to apply shape linearisation [5 ###reference_b5###, 25 ###reference_b25###]. K\u00e4rkk\u00e4inen and Tiihonen used this technique to solve Bernoulli free-boundary problems [17 ###reference_b17###, 18 ###reference_b18###]. The application to a more general Bernoulli free-boundary problem has been investigated in Van der Zee et al [32 ###reference_b32###] by considering the whole problem in one weak form, and using -continuous -splines to represent discrete free boundaries, in order to allow the exact computation of the curvature in the shape derivatives. Montardini et al. [21 ###reference_b21###] extend this method by incorporating a collocation approach to update the boundary, and compare both methods by imposing Dirichlet or periodic boundary conditions on the vertical fixed boundary of the domain. 
The results show that collocation scheme has slightly worse accuracy but higher efficiency.\nIn the current work, we derive the shape-Newton method for a free-boundary problem involving the nonlinear Bernoulli boundary condition on the free boundary. We use our approach to also re-derive the shape-Newton method for the simpler Bernoulli free-boundary problem (containing a Dirichlet boundary condition), which was obtained in [32 ###reference_b32###] using a slightly different derivation.111We note that there is a typo for the strong form of the linearised problem in [32 ###reference_b32###]. This mistake is rectified in this paper; see equations (42a ###reference_.1###)\u2013(42d ###reference_.4###). Similar to K\u00e4rkk\u00e4inen and Tiihonen, we set up two weak statements: One derived from the boundary value problem with the Neumann boundary condition, and the other from the remaining free boundary condition (Dirichlet condition or nonlinear Bernoulli condition).\nA key result in our work is the shape derivative of the Bernoulli equation.\nIt turns out that it has various equivalent expressions that are surprisingly elegant: The primary result involves the normal derivative of the velocity squared (), and we show in detail how this can be equivalently computed using only the tangential components of the velocity, suitably weighted by curvatures; see Section 5.3 ###reference_###.\nWe present our shape-Newton scheme in both strong and weak form, and without reference to any particular underlying discretisation. We study the solvability of the linearised system in the continuous setting, that is, we establish coercivity of a suitable bilinear form under certain conditions of the data. We are also able to establish discrete solvability for a particular finite element approximation using deforming meshes, under certain conditions. We show numerical experiments involving open channel flow over a submerged triangle. We observe that the shape-Newton method converges superlinearly, and the results agree well with exact solutions and results from [6 ###reference_b6###].\nThe contents of this paper are arranged as follows. We first introduce the model problems either with the Dirichlet boundary condition or the Bernoulli equation on the free boundary in Section 2 ###reference_###. In Section 3 ###reference_###, we derive the weak form for both problems. Then, we introduce some basic concepts about shape derivatives in Section 4 ###reference_###. We carry out shape linearisation by applying Hadamard shape derivatives for the free-boundary problem in Section 5 ###reference_###. In this Section, we also present the various equivalent expressions for the shape derivative of the Bernoulli equation. In Section 6 ###reference_###, we present the Newton-like schemes, and present solvability results for the involved linearised systems (details in Appendix B ###reference_### and C ###reference_###). The finite element scheme using deforming meshes is given in Section 7 ###reference_### (details of its discrete solvability in Appendix D ###reference_###), as well as numerical experiments. These are followed by Conclusions in Section 8 ###reference_###."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Free-boundary Problem with Bernoulli or Dirichlet free-boundary condition",
|
| 15 |
+
"text": "We investigate the free boundary problem with either the Bernoulli condition or the Dirichlet condition on the free boundary. The Bernoulli condition is commonly used when considering steady, incompressible, and inviscid flow, but it is nonlinear, making the free-boundary problem more challenging to solve. To be general, the boundary conditions on the fixed boundaries are Robin boundary conditions."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Free-boundary Problem With Bernoulli Condition",
|
| 21 |
+
"text": "The free-boundary problem with a Bernoulli condition can be stated as seeking an unknown domain ( or even ), and a corresponding scalar potential function . For fluid problems, is then the velocity vector. The boundary contains a free boundary , and the remainder , for example in the two-dimensional open-channel flow case, contains a left boundary for inflow, a right boundary for outflow, and the bed which can have any reasonable shape. Figure 1 ###reference_### is an example of the domain and the parametrization of the free boundary .\nThe problem can be written as\nwhere is the normal derivative with being the unit normal vector to the boundary pointing out the domain, and is the -th component (vertical component) of vector . The condition (1a ###reference_1###) is the PDE for potential , where is a sufficiently smooth given function. The condition (1b ###reference_2###) represents the kinematic condition on the free boundary. The condition (1c ###reference_3###) with real-valued constants and represents the Bernoulli condition.222Because of (1b ###reference_2###), the Bernoulli condition (1c ###reference_3###) is a condition on , hence it can be thought of as a surface-eikonal equation [26 ###reference_b26###, 13 ###reference_b13###].\nIn the standard case, , is the gravitational acceleration, and , where is the external pressure and is the constant density of the fluid.\nWe consider general Robin boundary conditions (1d ###reference_4###) on where , and are sufficiently smooth given functions. Thus we can approximate either a Neumann or Dirichlet-type condition depending on the value of : the Neumann boundary condition, obtained when , usually represents the kinematic condition, where the perpendicular fluid velocity is zero on the free or solid boundary. On the other hand, choosing yields the Dirichlet boundary condition . Furthermore, it is possible to impose mixed boundary conditions by choosing various values of on different parts of the boundaries (e.g. , and ).\nWe assume that for suitable data , , and , there is a nontrivial and sufficiently-smooth solution pair . The wellposedness of a free-boundary problem is studied in, for example, [14 ###reference_b14###, 24 ###reference_b24###]. By introducing a vector field , the displacement of the free boundary with respect to the referenced boundary (of constant height ) can be defined as\nto parametrize the domain and the free boundary , as shown in Figure 1 ###reference_###. This allows us to think of the problem (1a ###reference_1###)\u2013(1d ###reference_4###) in terms of the solution pair .\nThe part of the free boundary corresponding to inflow is assumed to be fixed in our work. That means in the two-dimensional case that the left node on the free boundary is fixed, i.e. .\nThere is a compatibility requirement on the parameters defining the Bernoulli and Robin boundary conditions, (1c ###reference_3###) and (1d ###reference_4###). It is well-known that the Bernoulli condition prescribes the conservation of energy along streamlines, therefore the total energy prescribed by the Bernoulli condition should match the value of the total energy given by the Robin boundary condition at the corresponding upstream (and downstream) coordinates. 
For example, in the two-dimensional case with the geometry corresponding to the illustration in Fig.1 ###reference_###, the choice , , and are compatible for inflow angle and Froude number .\nFurther to the compatibility requirements in the previous remark, we will also require the following, which will ensure the linearised operator is well-posed (see Remark 6.2 ###reference_theorem2###):\nLet denote the unit vector at , tangent to and outward from , and normal to . We require to have on those parts of that do not touch a part of where a Dirichlet boundary condition is imposed, or where is fixed. An example situation for which this is satisfied is a flow in an open channel with non-homogeneous Neumann boundary condition on the lateral boundary and Dirichlet boundary condition on the outflow. An alternative situation is where the Dirichlet boundary condition holds on all over ."
|
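Because the inline mathematics of (1a)–(1d) did not survive extraction, the following LaTeX sketch restates one common form of the governing equations and free-surface conditions; the symbols phi, lambda, B, g and x_d are assumptions of this sketch, not necessarily the paper's exact notation or scaling.

\[
\Delta\phi = f \ \ \text{in } \Omega, \qquad
\frac{\partial\phi}{\partial n} = 0 \ \ \text{on } \Gamma_{\mathrm{f}} \ \ \text{(kinematic)}, \qquad
\frac{\lambda}{2}\,|\nabla\phi|^{2} + g\,x_{d} = B \ \ \text{on } \Gamma_{\mathrm{f}} \ \ \text{(Bernoulli)},
\]
together with a Robin condition \(\alpha\,\partial_{n}\phi + \beta\,\phi = g_{R}\) on the fixed boundary \(\Gamma_{\mathrm{fix}}\).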
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Free-boundary Problem with Dirichlet Boundary Condition",
|
| 27 |
+
"text": "A more simple model problem is introduced by replacing the Bernoulli condition with the Dirichlet condition on the free boundary. The dependence on is now linear:\nwhere and are assumed to be sufficiently smooth on (e.g. on makes sense for the inflow).\nBy choosing on (i.e., Dirichlet boundary condition), this problem becomes the classical\nproblem for an ideal fluid, called the Bernoulli free-boundary problem [23 ###reference_b23###]."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "The Weak Form",
|
| 33 |
+
"text": "We will first find weak forms of both free-boundary problems in order to apply shape-calculus techniques to linearise these problems, and subsequently propose Newton-like schemes. To allow shape linearisation, the test functions will be more regular than what is usually assumed. Hence, let and be sufficiently smooth test functions.\nSince the only difference between the two free-boundary problems in Section 2 ###reference_### is the Bernoulli condition and the Dirichlet condition on the free boundary, the first weak form in the domain is the same in both situations. It can be obtained by multiplying the Laplacian equation ((1a ###reference_1###) or (3a ###reference_1###)) by the test function and integrating over , then applying the Green\u2019s formula with the Robin boundary conditions on and (homogeneous) Neumann boundary condition on , yielding\nwhere the semilinear form is defined as\nThe second weak form can be derived by multiplying the remaining free-boundary condition by the test function and integrating over ,\nwith the definition of the semilinear form as\nwhere can either be the left hand side of Bernoulli condition (1c ###reference_3###) or in case of the Dirichlet condition (3c ###reference_3###).\nGiven some approximation , the exact Newton method for an update , in weak form, would be\nWe now study the shape derivatives, which are present in the above left-hand side."
|
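As a purely schematic illustration of the structure just described (the residual symbols and unknowns below are assumptions, since the displayed formulas were lost in extraction), the two weak statements and the exact Newton step can be written in LaTeX as:

\[
\mathcal{R}_1\big((\alpha,\phi);v\big)=0 \quad \forall v, \qquad
\mathcal{R}_2\big((\alpha,\phi);w\big)=0 \quad \forall w,
\]
\[
\mathcal{R}_i'\big((\hat\alpha,\hat\phi)\big)\big[(\delta\alpha,\delta\phi)\big](\cdot)
= -\,\mathcal{R}_i\big((\hat\alpha,\hat\phi);\,\cdot\,\big), \qquad i=1,2,
\]
where \((\hat\alpha,\hat\phi)\) is the current approximation and \((\delta\alpha,\delta\phi)\) the sought update.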
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "4",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "Shape Derivatives",
|
| 39 |
+
"text": "The linearisation of and needs the differentiation of the weak forms with respect to the geometry, where the geometry itself is treated as a variable. Thus the shape derivatives are applied to a given domain, which requires some appropriate smoothness assumptions.\nThe weak forms (5 ###reference_###) and (7 ###reference_###) contain domain integrals and boundary integrals . The shape derivatives for a domain integral and a boundary integral can be obtained by the Hadamard formulas [5 ###reference_b5###, 25 ###reference_b25###]:\nSuppose , where\nand is an open and bounded domain with boundary of class . Consider the domain integral\nThen its shape derivative with respect to the perturbation is given by333, see e.g., [5 ###reference_b5###, Chapter 9]. In practise, is only needed on , instead of on the whole . Any extension of into would suffice since does not depend on the particular extension used.\nwhere denotes the outward normal derivative to .\nSuppose , where\nand is an open and bounded domain with boundary of class . Consider the boundary integral\nThen its shape derivative with respect to the perturbation is given by3 ###reference_te3###\nwhere denotes the normal vector to and is the (additive) curvature of .\nThe shape derivative of boundary integral for the open boundary (see [34 ###reference_b34###, Eq. (5.48)]) is:\nwhere is defined in Remark 2.3 ###reference_theorem3###.\n\nWhen is piecewise smooth, additional jump terms should be included in the boundary-integral shape derivative; see e.g., [25 ###reference_b25###, Ch. 3.8]."
|
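Since the displayed equations were stripped, here is a LaTeX sketch of the two Hadamard formulas in standard notation (the symbols f, Omega, Gamma, V and kappa are assumed names for this sketch):

\[
dJ_{\Omega}(\Omega)[\mathbf{V}]
= \int_{\Omega} f'[\mathbf{V}]\,\mathrm{d}x
+ \int_{\partial\Omega} f\,(\mathbf{V}\cdot\mathbf{n})\,\mathrm{d}s,
\qquad
dJ_{\Gamma}(\Gamma)[\mathbf{V}]
= \int_{\Gamma} f'[\mathbf{V}]\,\mathrm{d}s
+ \int_{\Gamma}\big(\partial_{n} f + \kappa f\big)(\mathbf{V}\cdot\mathbf{n})\,\mathrm{d}s,
\]
with \(f'[\mathbf{V}]\) the shape derivative of the integrand and \(\kappa\) the additive curvature.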
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "5",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "Linearisation",
|
| 45 |
+
"text": "The linearisation of and at an approximation pair close to the exact solution can be derived from the partial derivative of the weak forms with respect to and . We proceed formally when obtaining our linearisation: We assume that is any sufficiently regular approximation (in, say, ), close to , that lives in the approximate domain with a sufficiently smooth approximate free boundary (say, ) induced by the approximation .\nA key strategy in the derivation of our linearisation consists of the use of higher-order corrections to arrive at more convenient expressions: In particular, since is assumed to be close to , we will often use that satisfies the boundary conditions up to, say, ."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "5.1",
|
| 49 |
+
"parent_section_id": "5",
|
| 50 |
+
"section_name": "Linearisation of",
|
| 51 |
+
"text": "The Gteaux derivative at in the direction can be evaluated as\nThen the linearisation with respect to can be obtained by applying Hadamard formulas from Theorem 4.1 ###reference_theorem1### to (5 ###reference_###), assuming , which yields\nThe tangential gradient and tangential divergence satisfy\nBy substituting (11 ###reference_###) into (10 ###reference_###) and applying the tangential Green\u2019s identity [5 ###reference_b5###, 25 ###reference_b25###], (10 ###reference_###) can be approximated as 444The integral term over is missing in the formula in paper [32 ###reference_b32###].\nwhere, due to the Neumann boundary condition (1b ###reference_2###) (or (3b ###reference_2###)) and being close to , the related term is of higher order, hence it was neglected. We now use the announced compatibility conditions from Remark 2.1 ###reference_theorem1###-2.3 ###reference_theorem3### to remove the integral over ,\nFor Dirichlet condition case, (13 ###reference_###) can be written as\ndue to on the free boundary."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "5.2",
|
| 55 |
+
"parent_section_id": "5",
|
| 56 |
+
"section_name": "Linearisation of with Dirichlet condition",
|
| 57 |
+
"text": "Considering first the Dirichlet boundary condition, we have\nSimilar to the linearisation of with respect to , it is straightforward to evaluate the Gteaux derivative at in the direction ,\nThen by using the Hadamard formula on the boundary integral (15 ###reference_###), assuming (and recall that ), we have the shape derivative\nUsing the Dirichlet condition (3c ###reference_3###) and Neumann condition (3b ###reference_2###) on the free boundary, we can neglect the term and term in (LABEL:R2_H), similar to the reasoning in Section 5.1 ###reference_###. We then have the approximation"
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "5.3",
|
| 61 |
+
"parent_section_id": "5",
|
| 62 |
+
"section_name": "Linearisation of with Bernoulli condition",
|
| 63 |
+
"text": "To perform the linearisation of the Bernoulli condition, we require more regularity on as well as the test function . It is sufficient to assume and .555The end result (21 ###reference_###) and (30 ###reference_###) of the linearisation indicates that these regularity requirements may be weakened, although this has not been pursued further.\nSubstituting the Bernoulli condition (1c ###reference_3###) into the weak form (7 ###reference_###), we have\nThe linearisation in terms of at approximation is\nwhere the normal component was neglected, similar to the reasoning in Section 5.1 ###reference_###.\nTo find the Gteaux derivative with respect to at , the Hadamard formula yields\nAccording to the Bernoulli condition (1c ###reference_3###) and being close to , is close to , similar to what we did in Section 5.1 ###reference_###, the approximation is therefore\nwhere is the -coordinate (vertical coordinate) of the unit normal vector . In the two-dimensional case, the value of corresponds to the -component of . However, in dimensions, represents the -th component.\nWe now look more closely at the term ."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "5.3.1",
|
| 67 |
+
"parent_section_id": "5.3",
|
| 68 |
+
"section_name": "5.3.1 dimensional case",
|
| 69 |
+
"text": "We first continue assuming the general case in dimensions, and we will look into the two-dimensional case later for convenience to the reader.\nWe introduce the index form of by\nsuch that the Neumann boundary condition (1b ###reference_2###) can be rewritten in the form\nwhere we employ the Einstein summation convention.\nTaking the tangential gradient gives:\nWe define the tangential gradient and the matrix as in [7 ###reference_b7###]\nAccording to the definition of tangential gradient, we have\nwhere the second step is obtained by using Neumann boundary condition (1b ###reference_2###). Since the is the tangential gradient of which is only defined on the free surface , it can be extended as a constant beyond the surface such that its normal derivative is zero. Hence,\nThe equation (24 ###reference_###) is equivalent to\nBy using (28 ###reference_###) and (29 ###reference_###), we have\nwhere is the extended Weingarten map [7 ###reference_b7###], which is a tensor containing curvature type quantities. In particular, the trace of coincides with the summed curvature [34 ###reference_b34###, Sec. 4.5.2].\nSubstituting (30 ###reference_###) into (21 ###reference_###), the (approximate) shape linearisation in the -dimensional case becomes"
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5.3.2",
|
| 73 |
+
"parent_section_id": "5.3",
|
| 74 |
+
"section_name": "5.3.2 Three dimensional case",
|
| 75 |
+
"text": "For the three-dimensional case, let and be the principle curvatures. The matrix has eigenvalues and corresponding normalised eigenvectors [34 ###reference_b34###, Sec. 4.5.2]. Since is symmetric [34 ###reference_b34###, 7 ###reference_b7###], by the spectral decomposition theorem,\nwhere\nHence,\nwhere the last step is obtained because and are orthonormal [34 ###reference_b34###, Sec. 3.2.4].\nSubstituting (32 ###reference_###) into (21 ###reference_###), the approximate shape linearisation (up to higher-order terms) in the 3-D case becomes\nwhere is the -component of the unit normal vector ."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "5.3.3",
|
| 79 |
+
"parent_section_id": "5.3",
|
| 80 |
+
"section_name": "5.3.3 Two dimensional case",
|
| 81 |
+
"text": "For convenience to the reader, now we look into a specific case of the linerisation of , namely in two dimensions using Cartesian coordinates, which is more direct. In this case, we assume that we can introduce as the vertical displacement of the free surface with respect to the referenced free surface, the horizontal -axis, such that .\nGiven the approximation , we have the unit normal vector and the unit tangential vector . Then the Neumann boundary condition (1b ###reference_2###) on the free boundary can be written in the form of\nThis implies that its tangential derivative is also zero, i.e.\nwhich is equivalent to\nThen we have\nwhere , which is the curvature. The second and last steps are obtained by substituting the Neumann condition, and the third step is obtained by substitution of (34 ###reference_###).\nIn the two dimensional case, we have\nwhere By substituting (36 ###reference_###) into (30 ###reference_###) and using the Neumann condition (1b ###reference_2###), (35 ###reference_###) is consistent with (30 ###reference_###). The details can be found in Appendix A ###reference_###.\nOn substitution from (35 ###reference_###) into (21 ###reference_###), the approximate shape linearisation in the 2-D case is"
|
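For the two-dimensional graph description used above, a hedged LaTeX sketch of the standard geometric quantities, assuming the free surface is written as y = eta(x) (the symbol eta is an assumption of this sketch):

\[
\mathbf{n}=\frac{(-\eta',\,1)}{\sqrt{1+(\eta')^{2}}},\qquad
\boldsymbol{\tau}=\frac{(1,\,\eta')}{\sqrt{1+(\eta')^{2}}},\qquad
\kappa=\frac{\eta''}{\big(1+(\eta')^{2}\big)^{3/2}}.
\]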
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "6",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "Newton-Like Schemes",
|
| 87 |
+
"text": "Next, we use the linearisations in the previous section to construct Newton-like schemes. We first consider the general case in .\nWe introduce and , where and are the corrections to , which generates the domain with free boundary , and , respectively.666The inclusion is meant in the sense that each has a (non-unique) extension onto , which is in .\nIn each iteration, a reference free boundary is updated, and thereby the reference domain . The exact Newton method for , in weak form, would be\nInstead, we obtain more convenient Newton-like schemes by using the higher-order corrections of Section 5 ###reference_### to the above derivatives.777In particular, when and are the exact solutions , the Newton-like schemes coincides with the exact Newton scheme.\nWe subsequently provide a strong form interpretation of the scheme."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "6.1",
|
| 91 |
+
"parent_section_id": "6",
|
| 92 |
+
"section_name": "Weak form of the problem with Dirichlet Boundary condition",
|
| 93 |
+
"text": "The Newton-like equation for is obtained by combining (9 ###reference_###) and the approximation (14 ###reference_###) of , i.e.,\nFor the Dirichlet boundary condition, the Newton-like equation for is derived based on (16 ###reference_###) and approximation (18 ###reference_###) as"
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "6.2",
|
| 97 |
+
"parent_section_id": "6",
|
| 98 |
+
"section_name": "Weak form of the problem with Bernoulli Boundary condition",
|
| 99 |
+
"text": "The Newton-like equation for is obtained by combining (9 ###reference_###) and the approximation (13 ###reference_###) of , i.e.,\nFor the Bernoulli condition, introducing (20 ###reference_###), (21 ###reference_###) and (30 ###reference_###), the Newton-like equation for is"
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "6.3",
|
| 103 |
+
"parent_section_id": "6",
|
| 104 |
+
"section_name": "Strong form: General free-boundary perturbations",
|
| 105 |
+
"text": "1.\n\nInitialize with ; set .\n\n\n2.\n\nGiven , solve the linear coupled problem for :\n\n\n\n(41a)\n\n\n\n\n(41b)\n\n\n\n\n(41c)\n\n\n\n\n(41d)\n\n\n\n\n\n\n\n\n\n3.\n\nUpdate the free boundary displacement and potential as\n\n\n\n\n\n\n\n\n\n\n\n\n\n4.\n\nUpdate the free boundary (hence the domain) as\n\n\n\n\n\n\n\n\n \n\nThen repeat from step 2 with until convergence.\nIt is important to provide a strong form interpretation of the Newton-like scheme, so that the linearised equations can be used by methods that don\u2019t use weak forms. Furthermore, the strong form provides further insight and a starting point for analysis.\nIn the Dirichlet case, the strong form problem for extracted from (39a ###reference_.1###)-(39b ###reference_.2###) is:888We note that [32 ###reference_b32###, Section 4.1] has several typos in the strong from of the linearised system. Equations (42a ###reference_.1###)\u2013(42d ###reference_.4###) are correct versions for the case in [32 ###reference_b32###, Section 4.1] with vanishing Neumann data (i.e., set their ).\nwhile in the case of the Bernoulli condition, the strong form problem for extracted from (40a ###reference_.1###)-(40b ###reference_.2###) is:\nThe iterative algorithm associated to (43a ###reference_.1###)\u2013(43d ###reference_.4###) is given in Table 1 ###reference_###. The solutions are updated as\nwhere\n is such that satisfies the problem (41a ###reference_.1###)\u2013(41d ###reference_.4###) (while its tangential component is free to specify).\nAccordingly, the free boundary is updated as .\nOne can write the linearized system in mixed total/update form, which solves for the variables , instead of . This can be particularly helpful to remove any dependencies on (which lives on the previous domain , hence would need a suitable extension onto ); see [32 ###reference_b32###, Remark 5] where, in case of the Dirichlet boundary condition, the dependence on is shown to be completely eliminated.\nBoth linearized systems (42 ###reference_###) and (43 ###reference_###) can be shown to have a unique weak solution under certain conditions of the data. We have presented the details of these well-posedness analyses in Appendix B ###reference_### and C ###reference_###, for (42 ###reference_###) and (43 ###reference_###) respectively. In both cases, the analysis establishes coercivity of a bilinear form for a weak formulation for the variable , obtained by eliminating the variable from the system.\nIn the case of system (42 ###reference_###), the bilinear form corresponds to that of a Laplacian with a generalized Robin boundary condition involving an oblique derivative. Such problems have been analyzed in, e.g., [22 ###reference_b22###, 29 ###reference_b29###, 33 ###reference_b33###].\nIn the case of system (43 ###reference_###), the bilinear corresponds to that of a Laplacian with a generalized Robin boundary condition involving a surface Laplacian (Laplace\u2013Beltrami operator). Such problems have been analyzed in, e.g., [8 ###reference_b8###, 1 ###reference_b1###, 2 ###reference_b2###]."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "6.4",
|
| 109 |
+
"parent_section_id": "6",
|
| 110 |
+
"section_name": "Strong form: Vertical free-boundary perturbations",
|
| 111 |
+
"text": "1.\n\nInitialize with ; set .\n\n\n2.\n\nGiven , solve the free boundary problem\n\n\n\n(44a)\n\n\n\n\n(44b)\n\n\n\n\n\n\n\n\n(44d)\n\n\n\n\nfor , where .\n\n\n3.\n\nUpdate the free boundary displacement and potential as\n\n\n\n\n\n\n\n\n\n\n\n\n\n4.\n\nUpdate the free boundary (hence the domain) as\n\n\n\n\n\n\n\n\n \n\nThen repeat from step 2 with until convergence.\nA particular scenario arises in a two-dimensional case, where the free boundary is adjusted vertically such that .\nIn that case, we have such that\nwhere is the arc length and . The boundary integrals can be evaluated in a referenced domain along the direction, and this problem can be solved in terms of the pair . The algorithm is now displayed as Table 2 ###reference_###, and the geometry is updated vertically with ."
|
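A minimal Python sketch of the outer iteration in Table 2, under assumptions: solve_linearised_problem and residual_norm are hypothetical callables standing in for the discretised linear solve of (44a)–(44d) and a stopping criterion; this mirrors the listed steps, not the authors' actual code.

def shape_newton(phi, eta, solve_linearised_problem, residual_norm,
                 tol=1e-10, max_iter=30):
    # Outer Newton-like loop: repeatedly solve the linearised coupled
    # problem on the current domain, then update the potential phi and
    # the vertical free-surface displacement eta.
    for k in range(max_iter):
        delta_phi, delta_eta = solve_linearised_problem(phi, eta)  # step 2
        phi = phi + delta_phi                                      # step 3
        eta = eta + delta_eta                                      # step 3
        # step 4: the mesh/domain is rebuilt from the new eta before the
        # next call to solve_linearised_problem.
        if residual_norm(phi, eta) < tol:
            return phi, eta, k + 1
    raise RuntimeError("shape-Newton iteration did not converge")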
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "7",
|
| 115 |
+
"parent_section_id": null,
|
| 116 |
+
"section_name": "Numerical experiments",
|
| 117 |
+
"text": "###figure_1### ###figure_2### ###figure_3### ###figure_4### Next we present numerical experiments in 2D.\nWe start with a straightforward test case for the Dirichlet boundary condition problem and then focus on the submerged triangle problem. The first test case is also a Bernoulli free-boundary problem simplified from the submerged triangle problem, with a Dirichlet condition on both the fixed and free boundary. The submerged triangle problem is the problem to which we are mainly interested in applying this shape-Newton scheme. We will use the algorithm in Table 2 ###reference_### such that the displacement of the free boundary is updated vertically.\nWe use a finite element method, based on the weak form of the linearized system in Table 2 ###reference_###. That is, we seek such that\nwhere and are finite element spaces based on a quasi-uniform partition (triangulation) of into a set of shape-regular simplicial elements . In particular, we choose continuous piecewise-linear approximations, i.e., and , where the (line) elements in correspond to the free-boundary edges of the (triangular) elements in adjacent to . The space incorporates the condition at the inflow of the free boundary (recall Remark 2.1 ###reference_theorem1###).\nNotice that in the 2D case,\non the free surface , where represents the arc length along the free surface, which allows us to write (46 ###reference_###) in terms of .\nGiven a new , the free boundary is updated by moving the mesh nodes on vertically with the distance . The other mesh nodes are then updated accordingly to yield a smoothly deformed mesh. In particular, we update the other mesh nodes simply by moving vertically using a linearly-interpolated fraction of the distance at the same -coordinate. Further implementation details can be found in Section 6.6.2 in [12 ###reference_b12###].\nFor simplicity, the curvature along the free surface is evaluated by a finite difference approximation. An alternative to this approximation of is to obtain the linearisation directly from piecewise smooth free boundaries (which we have not pursued in this work); cf. Remark 4.4 ###reference_theorem4###.\nUnder certain conditions of the mesh and data, the solvability of the discrete shape-Newton schemes for Bernoulli boundary conditions has been proven in Appendix D ###reference_###."
|
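A small NumPy sketch of the two implementation details mentioned above: a central-difference approximation of the free-surface curvature for a surface written as y = eta(x), and the vertical mesh deformation in which every node is moved by a linearly interpolated fraction of the surface displacement at its x-coordinate (zero at the bed, one at the surface, assuming positive depth everywhere). All names are illustrative assumptions, not the authors' implementation.

import numpy as np

def curvature_fd(x, eta):
    # Central-difference curvature of the free surface y = eta(x).
    d_eta = np.gradient(eta, x)        # eta'
    dd_eta = np.gradient(d_eta, x)     # eta''
    return dd_eta / (1.0 + d_eta**2) ** 1.5

def deform_mesh_vertically(nodes, y_bed, y_old, y_new):
    # nodes: (N, 2) array of mesh coordinates; y_bed, y_old, y_new are the
    # bed height and the old/new free-surface heights evaluated at each
    # node's x-coordinate. Bed nodes stay put, free-surface nodes move by
    # the full displacement, interior nodes by a linear fraction.
    frac = (nodes[:, 1] - y_bed) / (y_old - y_bed)
    moved = nodes.copy()
    moved[:, 1] += frac * (y_new - y_old)
    return moved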
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "7.1",
|
| 121 |
+
"parent_section_id": "7",
|
| 122 |
+
"section_name": "Dirichlet boundary condition",
|
| 123 |
+
"text": "The test case for the free-boundary problem with Dirichlet boundary condition is a Bernoulli free-boundary problem derived from a manufactured solution,\nsuch that the data can be obtained as\nWith an initial domain , how the domain and the triangulation changes in the first three iterations are shown in Figure 2 ###reference_###. Starting with a parabola, the free boundary is almost a straight line after the third iteration. The source term has been tested in [32 ###reference_b32###] by choosing a more complicated manufactured solution.\nFigure 3 ###reference_### shows the error between numerical results of and compared with the exact solution (47 ###reference_###) on the free boundary with a different number of finite element meshes. The value of represents the number of nodes along the -axis, and the number of nodes along the -axis is . Although the error is slightly larger with more nodes, the shape-Newton scheme converges superlinearly.\n###figure_5###"
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "7.2",
|
| 127 |
+
"parent_section_id": "7",
|
| 128 |
+
"section_name": "The submerged triangle problem",
|
| 129 |
+
"text": "The second test case is the submerged triangle problem investigated by Dias and Vanden-Broeck [6 ###reference_b6###]. A detailed derivation of the governing equations can be found in [12 ###reference_b12###, Appendix]. In this problem, we have a Neumann boundary condition on and a Dirichlet boundary condition on , i.e. on and on . The data defining this problem is given as follows:\nThe Bernoulli condition is obtained by giving , and where is the Froude number. The domain is a rectangle truncated at containing an isosceles triangle symmetric about having an angle and width at the bottom, as shown in Figure 4 ###reference_###. The space is discretised as shown in Figure 5 ###reference_###, where it was uniformly spaced along the axis and the vertical direction for fixed values of . Then the algorithm in Table 2 ###reference_### can be applied to solve for the pair , and the free boundary can be updated vertically with .\n###figure_6### Dias and Vanden-Broeck [6 ###reference_b6###] found that the solutions to the submerged problem have two types: One is supercritical flow both upstream and downstream, and the other is supercritical (or subcritical) upstream and subcritical (or supercritical) downstream flow. Our numerical solutions are the first type, and we can compare them with the results in [6 ###reference_b6###]."
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "7.2.1",
|
| 133 |
+
"parent_section_id": "7.2",
|
| 134 |
+
"section_name": "7.2.1 Convergence rate of Shape-Newton method",
|
| 135 |
+
"text": "The rate of convergence is shown in Figure 6 ###reference_###, where we show and against the number of iterations for , and . These show superlinear convergence. This figure also shows the comparison for different mesh densities.\n###figure_7###"
|
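To make the superlinear-convergence observation checkable, here is a hedged Python helper that estimates the observed order p from a decreasing sequence of residual (or error) norms, such as the values behind Figure 6 (not reproduced here): if the norms decay like e_{k+1} ~ C * e_k**p, then p can be read off from consecutive ratios.

import math

def observed_orders(norms):
    # Estimate the convergence order p from consecutive norms; values
    # clearly above 1 indicate superlinear convergence.
    return [
        math.log(norms[k + 2] / norms[k + 1]) / math.log(norms[k + 1] / norms[k])
        for k in range(len(norms) - 2)
    ]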
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"section_id": "7.2.2",
|
| 139 |
+
"parent_section_id": "7.2",
|
| 140 |
+
"section_name": "7.2.2 Robustness of the Shape-Newton scheme",
|
| 141 |
+
"text": "###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### Some converged grids of the whole region are shown in Figure 7 ###reference_###. We noticed that has a maximum value at on the free boundary, and the value of changes with the values of , and . Figure 8 ###reference_### shows the value of against the Froude number for various values of . We can observe from Figure 8 ###reference_### that will decrease when the Froude number becomes larger for the fixed width of the triangle. In addition, for fixed values of and angle , will also decrease with the width of the triangle. This agrees with the results presented by Dias and Vanden-Broeck in [6 ###reference_b6###], who solved this problem for fixed . To improve convergence behaviour, we explored using a continuation technique in the Froude number F. However, as seen in Figure 8c ###reference_sf3### and Figure 8d ###reference_sf4###, for larger triangles (larger ), convergence generally becomes more difficult, as those test cases are closer to critical situations beyond which there is no solution. See [6 ###reference_b6###] for a detailed study on critical values.\nWe also found that the solutions are challenging for larger angle for fixed width. The possible reason is that with a higher triangle height, the flow can approach its limiting configuration as a thin layer over the edge of the triangle with a stagnation point, hence may require local mesh refinement."
|
| 142 |
+
},
|
| 143 |
+
{
|
| 144 |
+
"section_id": "8",
|
| 145 |
+
"parent_section_id": null,
|
| 146 |
+
"section_name": "Conclusion",
|
| 147 |
+
"text": "We derived a shape-Newton method to solve generic free-boundary problems with the nonlinear Bernoulli boundary condition. The linearised system is obtained from applying the Hadamard formula for shape derivatives to a suitable weak form of the free boundary problem. After linearisation and neglecting higher-order terms, one obtains a linear boundary-valued problem to be solved at each iteration.\nThe shape linearisation of the nonlinear Bernoulli equation is a key result in our work. In its derivation, many terms can be neglected (as a higher-order correction) due to the homogeneous Neumann boundary condition. After some calculations, we find that the result involves the normal derivative of the velocity squared, i.e. . This can be equivalently computed as , see Section 5.3 ###reference_###.\nThe linearised system essentially corresponds to a boundary value problem for the Laplacian with a generalized Robin boundary condition involving a surface Laplacian (Laplace\u2013Beltrami), which in turn depends on the curvature. Another key result in our work is a study of the solvability of this linearised system. Under certain conditions on the data, one can guarantee the existence of a unique solution (details in Appendix C ###reference_###).\nWe applied our method to compute the flow over a submerged triangle for a range of Froude numbers and triangle shapes, and obtained consistent results with the earlier literature [6 ###reference_b6###]. Moreover, the numerical test revealed that the shape-Newton method converges superlinearly. A theoretical explanation of this behaviour remains an open problem."
|
| 148 |
+
}
|
| 149 |
+
],
|
| 150 |
+
"appendix": [
|
| 151 |
+
{
|
| 152 |
+
"section_id": "Appendix 1",
|
| 153 |
+
"parent_section_id": null,
|
| 154 |
+
"section_name": "Appendix A Consistency of (35) with (30)",
|
| 155 |
+
"text": "In this appendix, we will show the detail about consistency between the result (30 ###reference_###) for the -dimensional case and the result (35 ###reference_###) for the two-dimensional case.\nIn the two-dimensional case, we have the unit normal vector . Thus,\nwhere represents the curvature. Hence,\nBy substituting (48 ###reference_###) and (50 ###reference_###) into the definition of (11 ###reference_###), we obtain\nNow by using (51 ###reference_###) and the Neumann boundary condition (1b ###reference_2###), we have\nHence, (30 ###reference_###) in the two-dimensional case equals (35 ###reference_###)."
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"section_id": "Appendix 2",
|
| 159 |
+
"parent_section_id": null,
|
| 160 |
+
"section_name": "Appendix B Solvability of the shape-linearized system for the Dirichlet boundary condition",
|
| 161 |
+
"text": "In this Appendix we show that, under certain conditions of the data, the shape-linearized system (42a ###reference_.1###)\u2013(42d ###reference_.4###) for the free-boundary problem with Dirichlet boundary condition (i.e., (3a ###reference_1###)\u2013(3d ###reference_4###)), has a unique solution.\nFrom the Dirichlet boundary condition (42d ###reference_.4###), we have\nprovided . Note that for the case when is a constant and , this problem has been shown to have a unique solution; see, e.g. [32 ###reference_b32###].\nSubstituting (52 ###reference_###) into (42c ###reference_.3###), the system becomes one-way coupled, i.e., a boundary-value problem for , and subsequently an equation for :\nThe boundary-value problem for has essentially a generalized Robin boundary condition involving an oblique derivative () on . To guarantee existence of a unique weak solution to this boundary-value problem, we use the Lax-Milgram theorem and establish coercivity of a suitable bilinear form. We assume is a bounded Lipschitz-continuous domain, and is sufficiently smooth to ensure continuity of the bilinear form and linear form of the weak formulation.\nA weak form of (53a ###reference_.1###)\u2013(53c ###reference_.3###) seeks such that\nwhere\nTo study the coercivity of , note that\nhence, if the last two terms are nonnegative, coercivity holds when (by a Poincar\u00e9\u2013Steklov inequality; see, e.g., [11 ###reference_b11###, Eq. (31.23)]). We note that the penultimate term can be written as\nTherefore, sufficient conditions that guarantee coercivity are:\nand a closed free boundary (hence ) or .\nThese conditions then guarantee that while follows from (52 ###reference_###). Generally, one expects additional regularity for beyond , so that inherits this regularity and becomes a Lipschitz-continuous vector field on . Such regularity study is outside the scope of this work."
|
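As a schematic reminder of what is being verified (the symbols a, ell, c and C are assumptions of this sketch), the Lax–Milgram theorem requires boundedness and coercivity of the bilinear form:

\[
|a(u,v)| \le C\,\|u\|_{H^{1}(\Omega)}\,\|v\|_{H^{1}(\Omega)},
\qquad
a(v,v) \ge c\,\|v\|_{H^{1}(\Omega)}^{2},
\]
so that the weak problem \(a(\delta\phi,v)=\ell(v)\) for all admissible \(v\) admits a unique solution \(\delta\phi\).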
| 162 |
+
},
|
| 163 |
+
{
|
| 164 |
+
"section_id": "Appendix 3",
|
| 165 |
+
"parent_section_id": null,
|
| 166 |
+
"section_name": "Appendix C Solvability of the shape-linearized system for the Bernoulli boundary condition",
|
| 167 |
+
"text": "In this Appendix we show that, under certain conditions of the data, the shape-linearized system (43 ###reference_###) for the free-boundary problem with Bernoulli boundary condition (i.e., (1a ###reference_1###)\u2013(1d ###reference_4###)), has a unique solution.\nThe linearized Bernoulli condition (43d ###reference_.4###) can be rearranged to:\nwhere and , provided that .\nLet and for notation convenience, (58 ###reference_###) can be rewritten as\nSimilar to the approach in Appendix B ###reference_###, by substituting (59 ###reference_###) into (43c ###reference_.3###), we obtain a boundary-value problem for :\nThis is essentially a problem for the Laplacian\nwith a generalized Robin boundary condition on involving a surface Laplacian (Laplace\u2013Beltrami operator). Again, to guarantee existence of a unique weak solution to this boundary-value problem, we use the Lax-Milgram theorem and establish coercivity of a suitable bilinear form. We assume is a bounded Lipschitz-continuous domain, and is sufficiently smooth to ensure continuity of the bilinear form and linear form of the weak formulation.\nA weak form of (60a ###reference_.1###)\u2013(60c ###reference_.3###) seeks such that\nwhere\nTo study the coercivity of , note that\nhence, similar to the approach in Appendix B ###reference_###, if the last three terms are suitably bounded, coercivity holds. We note that the last two terms can be written as\nTherefore, sufficient conditions that guarantee coercivity are:\nThe same remark at the end of Appendix B ###reference_### applies: The above conditions guarantee existence of while follows from (58 ###reference_###). Generally, one expects additional regularity for beyond , so that inherits this regularity and becomes a Lipschitz-continuous vector field on . Such regularity study is outside the scope of this work."
|
| 168 |
+
},
|
| 169 |
+
{
|
| 170 |
+
"section_id": "Appendix 4",
|
| 171 |
+
"parent_section_id": null,
|
| 172 |
+
"section_name": "Appendix D Solvability of the discrete shape-linearized system for the Bernoulli condition",
|
| 173 |
+
"text": "In this Appendix we show that, under certain conditions of the data and mesh, the finite element method for the shape-linearized system (46 ###reference_###) has a unique discrete solution. To that end, we use the Lax-Milgram theorem and establish coercivity of the coupled system in (46 ###reference_###).999We note that the proof of coercivity in the continuous setting, see Appendix C ###reference_###, does not apply in the discrete case, because in the continuous setting the geometrical variable could be straightforwardly eliminated from the system. We therefore establish coercivity separately in the discrete case.\nThe discrete problem (46 ###reference_###) can be written as follows:\nwhere\nTo study the coercivity of , note that\nNext, we bound the coupling terms in (68a ###reference_.1###) and (68b ###reference_.2###). Let and , then\nAccording to discrete trace inequalities [10 ###reference_b10###, Ch. 12.2], there are constants (independent of ) such that\nwhere .\nUsing these inequalities as well as Young\u2019s inequality (see, e.g., [11 ###reference_b11###, Appendix C.3]), we obtain\nNext, using Poincare inequality,\nAnd substitution from (70 ###reference_###)-(72 ###reference_###) into (68a ###reference_.1###) and (68b ###reference_.2###), we have\nHence, we obtain the estimate\nTherefore, sufficient conditions that guarantee discrete solvability are:"
|
| 174 |
+
},
|
| 175 |
+
{
|
| 176 |
+
"section_id": "Appendix x1",
|
| 177 |
+
"parent_section_id": null,
|
| 178 |
+
"section_name": "Acknowledgments",
|
| 179 |
+
"text": "The authors are grateful to Anna Kalogirou and Onno Bokhove for additional discussion. The authors would also like to thank the anonymous reviewers for their helpful comments and suggestions, which led to many significant improvements, in particular the addition of Section 5.3.2 ###reference_.SSS2### (on 3-D equivalent expression) and Remarks 6.2 ###reference_theorem2### and 7.3 ###reference_theorem3###, and corresponding Appendices (on solvability of the continuous and discrete linearised systems)."
|
| 180 |
+
}
|
| 181 |
+
],
|
| 182 |
+
"tables": {
|
| 183 |
+
"1": {
|
| 184 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T1\">\n<figcaption class=\"ltx_caption\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>The coupled shape-Newton scheme solving for using a linearised Bernoulli boundary condition\u00a0(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.14254v2#S6.E41.4\" title=\"Equation 41d \u2023 Equation 41 \u2023 Item 2 \u2023 Table 1 \u2023 6.3 Strong form: General free-boundary perturbations \u2023 6 Newton-Like Schemes \u2023 A Shape-Newton Method for Free-boundary Problems Subject to The Bernoulli Boundary Condition\"><span class=\"ltx_text ltx_ref_tag\">41d</span></a>) on the free boundary. For the linearised Dirichlet boundary condition on the free boundary, replace\u00a0(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.14254v2#S6.E41.4\" title=\"Equation 41d \u2023 Equation 41 \u2023 Item 2 \u2023 Table 1 \u2023 6.3 Strong form: General free-boundary perturbations \u2023 6 Newton-Like Schemes \u2023 A Shape-Newton Method for Free-boundary Problems Subject to The Bernoulli Boundary Condition\"><span class=\"ltx_text ltx_ref_tag\">41d</span></a>) by\u00a0(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2305.14254v2#S6.E42.4\" title=\"Equation 42d \u2023 Equation 42 \u2023 6.3 Strong form: General free-boundary perturbations \u2023 6 Newton-Like Schemes \u2023 A Shape-Newton Method for Free-boundary Problems Subject to The Bernoulli Boundary Condition\"><span class=\"ltx_text ltx_ref_tag\">42d</span></a>).</figcaption>\n<p class=\"ltx_p\" id=\"S6.T1.9\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_middle ltx_framed ltx_framed_rectangle\" id=\"S6.T1.9.1\" style=\"width:411.9pt;\">\n<span class=\"ltx_enumerate\" id=\"S6.I1\">\n<span class=\"ltx_item\" id=\"S6.I1.i1\" style=\"list-style-type:none;\"><span class=\"ltx_tag ltx_tag_item\">1.</span>\n<span class=\"ltx_para\" id=\"S6.I1.i1.p1\">\n<span class=\"ltx_p\" id=\"S6.I1.i1.p1.2\"><span class=\"ltx_text\" id=\"S6.I1.i1.p1.2.1\" style=\"font-size:80%;\">Initialize with </span><span class=\"ltx_text\" id=\"S6.I1.i1.p1.2.2\" style=\"font-size:80%;\">; set </span><span class=\"ltx_text\" id=\"S6.I1.i1.p1.2.3\" style=\"font-size:80%;\">.</span>\n<br class=\"ltx_break\"/></span>\n</span></span>\n<span class=\"ltx_item\" id=\"S6.I1.i2\" style=\"list-style-type:none;\"><span class=\"ltx_tag ltx_tag_item\">2.</span>\n<span class=\"ltx_para\" id=\"S6.I1.i2.p1\">\n<span class=\"ltx_p\" id=\"S6.I1.i2.p1.2\"><span class=\"ltx_text\" id=\"S6.I1.i2.p1.2.1\" style=\"font-size:80%;\">Given </span><span class=\"ltx_text\" id=\"S6.I1.i2.p1.2.2\" style=\"font-size:80%;\">, solve the linear coupled problem for </span><span class=\"ltx_text\" id=\"S6.I1.i2.p1.2.3\" style=\"font-size:80%;\">:</span></span>\n<span class=\"ltx_equationgroup ltx_eqn_table\" id=\"S6.E41\">\n<span><span class=\"ltx_eqn_row\" id=\"Ax1.EGx25\"><span class=\"ltx_eqn_cell\" colspan=\"4\"></span></span>\n<span class=\"ltx_equation ltx_eqn_row ltx_align_baseline\" id=\"S6.E41.1\">\n<span class=\"ltx_eqn_cell ltx_eqn_eqno ltx_align_middle ltx_align_left\" rowspan=\"1\"><span class=\"ltx_tag ltx_tag_equation ltx_align_left\">(41a)</span></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padleft\"></span>\n<span class=\"ltx_td ltx_align_right ltx_eqn_cell\"></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padright\"></span></span>\n<span class=\"ltx_equation ltx_eqn_row ltx_align_baseline\" id=\"S6.E41.2\">\n<span class=\"ltx_eqn_cell ltx_eqn_eqno ltx_align_middle ltx_align_left\" 
rowspan=\"1\"><span class=\"ltx_tag ltx_tag_equation ltx_align_left\">(41b)</span></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padleft\"></span>\n<span class=\"ltx_td ltx_align_right ltx_eqn_cell\"></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padright\"></span></span>\n<span class=\"ltx_equation ltx_eqn_row ltx_align_baseline\" id=\"S6.E41.3\">\n<span class=\"ltx_eqn_cell ltx_eqn_eqno ltx_align_middle ltx_align_left\" rowspan=\"1\"><span class=\"ltx_tag ltx_tag_equation ltx_align_left\">(41c)</span></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padleft\"></span>\n<span class=\"ltx_td ltx_align_right ltx_eqn_cell\"></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padright\"></span></span>\n<span class=\"ltx_equation ltx_eqn_row ltx_align_baseline\" id=\"S6.E41.4\">\n<span class=\"ltx_eqn_cell ltx_eqn_eqno ltx_align_middle ltx_align_left\" rowspan=\"1\"><span class=\"ltx_tag ltx_tag_equation ltx_align_left\">(41d)</span></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padleft\"></span>\n<span class=\"ltx_td ltx_align_right ltx_eqn_cell\"></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padright\"></span></span>\n<span class=\"ltx_equation ltx_eqn_row ltx_align_baseline\" id=\"S6.E41.x1\">\n<span class=\"ltx_eqn_cell ltx_eqn_center_padleft\"></span>\n<span class=\"ltx_td ltx_align_right ltx_eqn_cell\"></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padright\"></span></span></span>\n</span>\n</span></span>\n<span class=\"ltx_item\" id=\"S6.I1.i3\" style=\"list-style-type:none;\"><span class=\"ltx_tag ltx_tag_item\">3.</span>\n<span class=\"ltx_para\" id=\"S6.I1.i3.p1\">\n<span class=\"ltx_p\" id=\"S6.I1.i3.p1.1\"><span class=\"ltx_text\" id=\"S6.I1.i3.p1.1.1\" style=\"font-size:80%;\">Update the free boundary displacement and potential as</span></span>\n<span class=\"ltx_equationgroup ltx_eqn_align ltx_eqn_table\" id=\"Ax1.EGx26\">\n<span id=\"S6.Ex2\"><span class=\"ltx_equation ltx_eqn_row ltx_align_baseline\">\n<span class=\"ltx_eqn_cell ltx_eqn_center_padleft\"></span>\n<span class=\"ltx_td ltx_align_right ltx_eqn_cell\"></span>\n<span class=\"ltx_td ltx_align_left ltx_eqn_cell\"></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padright\"></span></span></span>\n<span id=\"S6.Ex3\"><span class=\"ltx_equation ltx_eqn_row ltx_align_baseline\">\n<span class=\"ltx_eqn_cell ltx_eqn_center_padleft\"></span>\n<span class=\"ltx_td ltx_align_right ltx_eqn_cell\"></span>\n<span class=\"ltx_td ltx_align_left ltx_eqn_cell\"></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padright\"></span></span></span>\n</span>\n</span></span>\n<span class=\"ltx_item\" id=\"S6.I1.i4\" style=\"list-style-type:none;\"><span class=\"ltx_tag ltx_tag_item\">4.</span>\n<span class=\"ltx_para\" id=\"S6.I1.i4.p1\">\n<span class=\"ltx_p\" id=\"S6.I1.i4.p1.1\"><span class=\"ltx_text\" id=\"S6.I1.i4.p1.1.1\" style=\"font-size:80%;\">Update the free boundary (hence the domain) as</span></span>\n<span class=\"ltx_equationgroup ltx_eqn_align ltx_eqn_table\" id=\"Ax1.EGx27\">\n<span id=\"S6.Ex4\"><span class=\"ltx_equation ltx_eqn_row ltx_align_baseline\">\n<span class=\"ltx_eqn_cell ltx_eqn_center_padleft\"></span>\n<span class=\"ltx_td ltx_align_right ltx_eqn_cell\"></span>\n<span class=\"ltx_td ltx_align_left ltx_eqn_cell\"></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padright\"></span></span></span>\n</span>\n</span></span>\n<span class=\"ltx_item\" id=\"S6.I1.ix1\" style=\"list-style-type:none;\"><span class=\"ltx_tag ltx_tag_item\">\u00a0</span>\n<span class=\"ltx_para\" 
id=\"S6.I1.ix1.p1\">\n<span class=\"ltx_p\" id=\"S6.I1.ix1.p1.1\"><span class=\"ltx_text\" id=\"S6.I1.ix1.p1.1.1\" style=\"font-size:80%;\">Then repeat from step\u00a02\u00a0with </span><span class=\"ltx_text\" id=\"S6.I1.ix1.p1.1.2\" style=\"font-size:80%;\"> until convergence.</span></span>\n</span></span>\n</span>\n</span><span class=\"ltx_text\" id=\"S6.T1.9.2\" style=\"font-size:80%;\"></span></p>\n</figure>",
|
| 185 |
+
"capture": "Table 1: The coupled shape-Newton scheme solving for using a linearised Bernoulli boundary condition\u00a0(41d) on the free boundary. For the linearised Dirichlet boundary condition on the free boundary, replace\u00a0(41d) by\u00a0(42d)."
|
| 186 |
+
},
|
| 187 |
+
"2": {
|
| 188 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T2\">\n<figcaption class=\"ltx_caption\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>The coupled shape-Newton scheme for . </figcaption>\n<p class=\"ltx_p\" id=\"S6.T2.6\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_middle ltx_framed ltx_framed_rectangle\" id=\"S6.T2.6.1\" style=\"width:411.9pt;\">\n<span class=\"ltx_enumerate\" id=\"S6.I2\">\n<span class=\"ltx_item\" id=\"S6.I2.i1\" style=\"list-style-type:none;\"><span class=\"ltx_tag ltx_tag_item\">1.</span>\n<span class=\"ltx_para\" id=\"S6.I2.i1.p1\">\n<span class=\"ltx_p\" id=\"S6.I2.i1.p1.2\"><span class=\"ltx_text\" id=\"S6.I2.i1.p1.2.1\" style=\"font-size:80%;\">Initialize with </span><span class=\"ltx_text\" id=\"S6.I2.i1.p1.2.2\" style=\"font-size:80%;\">; set </span><span class=\"ltx_text\" id=\"S6.I2.i1.p1.2.3\" style=\"font-size:80%;\">.</span>\n<br class=\"ltx_break\"/></span>\n</span></span>\n<span class=\"ltx_item\" id=\"S6.I2.i2\" style=\"list-style-type:none;\"><span class=\"ltx_tag ltx_tag_item\">2.</span>\n<span class=\"ltx_para\" id=\"S6.I2.i2.p1\">\n<span class=\"ltx_p\" id=\"S6.I2.i2.p1.1\"><span class=\"ltx_text\" id=\"S6.I2.i2.p1.1.1\" style=\"font-size:80%;\">Given </span><span class=\"ltx_text\" id=\"S6.I2.i2.p1.1.2\" style=\"font-size:80%;\">, solve the free boundary problem</span></span>\n<span class=\"ltx_equationgroup ltx_eqn_table\" id=\"S6.E44\">\n<span><span class=\"ltx_eqn_row\" id=\"Ax1.EGx30\"><span class=\"ltx_eqn_cell\" colspan=\"4\"></span></span>\n<span class=\"ltx_equation ltx_eqn_row ltx_align_baseline\" id=\"S6.E44.1\">\n<span class=\"ltx_eqn_cell ltx_eqn_eqno ltx_align_middle ltx_align_left\" rowspan=\"1\"><span class=\"ltx_tag ltx_tag_equation ltx_align_left\">(44a)</span></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padleft\"></span>\n<span class=\"ltx_eqn_cell ltx_align_center\"></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padright\"></span></span>\n<span class=\"ltx_equation ltx_eqn_row ltx_align_baseline\" id=\"S6.E44.2\">\n<span class=\"ltx_eqn_cell ltx_eqn_eqno ltx_align_middle ltx_align_left\" rowspan=\"1\"><span class=\"ltx_tag ltx_tag_equation ltx_align_left\">(44b)</span></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padleft\"></span>\n<span class=\"ltx_eqn_cell ltx_align_center\"></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padright\"></span></span>\n<span class=\"ltx_equation ltx_eqn_row ltx_align_baseline\" id=\"S6.E44.x1\">\n<span class=\"ltx_eqn_cell ltx_eqn_center_padleft\"></span>\n<span class=\"ltx_eqn_cell ltx_align_center\"></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padright\"></span></span>\n<span class=\"ltx_equation ltx_eqn_row ltx_align_baseline\" id=\"S6.E44.4\">\n<span class=\"ltx_eqn_cell ltx_eqn_eqno ltx_align_middle ltx_align_left\" rowspan=\"1\"><span class=\"ltx_tag ltx_tag_equation ltx_align_left\">(44d)</span></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padleft\"></span>\n<span class=\"ltx_eqn_cell ltx_align_center\"></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padright\"></span></span></span>\n</span>\n<span class=\"ltx_p\" id=\"S6.I2.i2.p1.3\"><span class=\"ltx_text\" id=\"S6.I2.i2.p1.3.1\" style=\"font-size:80%;\">for </span><span class=\"ltx_text\" id=\"S6.I2.i2.p1.3.2\" style=\"font-size:80%;\">, where </span><span class=\"ltx_text\" id=\"S6.I2.i2.p1.3.3\" style=\"font-size:80%;\">.</span>\n<br class=\"ltx_break\"/></span>\n</span></span>\n<span class=\"ltx_item\" id=\"S6.I2.i3\" style=\"list-style-type:none;\"><span 
class=\"ltx_tag ltx_tag_item\">3.</span>\n<span class=\"ltx_para\" id=\"S6.I2.i3.p1\">\n<span class=\"ltx_p\" id=\"S6.I2.i3.p1.1\"><span class=\"ltx_text\" id=\"S6.I2.i3.p1.1.1\" style=\"font-size:80%;\">Update the free boundary displacement and potential as</span></span>\n<span class=\"ltx_equationgroup ltx_eqn_align ltx_eqn_table\" id=\"Ax1.EGx31\">\n<span id=\"S6.Ex2b\"><span class=\"ltx_equation ltx_eqn_row ltx_align_baseline\">\n<span class=\"ltx_eqn_cell ltx_eqn_center_padleft\"></span>\n<span class=\"ltx_td ltx_align_right ltx_eqn_cell\"></span>\n<span class=\"ltx_td ltx_align_left ltx_eqn_cell\"></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padright\"></span></span></span>\n<span id=\"S6.Ex3a\"><span class=\"ltx_equation ltx_eqn_row ltx_align_baseline\">\n<span class=\"ltx_eqn_cell ltx_eqn_center_padleft\"></span>\n<span class=\"ltx_td ltx_align_right ltx_eqn_cell\"></span>\n<span class=\"ltx_td ltx_align_left ltx_eqn_cell\"></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padright\"></span></span></span>\n</span>\n</span></span>\n<span class=\"ltx_item\" id=\"S6.I2.i4\" style=\"list-style-type:none;\"><span class=\"ltx_tag ltx_tag_item\">4.</span>\n<span class=\"ltx_para\" id=\"S6.I2.i4.p1\">\n<span class=\"ltx_p\" id=\"S6.I2.i4.p1.1\"><span class=\"ltx_text\" id=\"S6.I2.i4.p1.1.1\" style=\"font-size:80%;\">Update the free boundary (hence the domain) as</span></span>\n<span class=\"ltx_equationgroup ltx_eqn_align ltx_eqn_table\" id=\"Ax1.EGx32\">\n<span id=\"S6.Ex4a\"><span class=\"ltx_equation ltx_eqn_row ltx_align_baseline\">\n<span class=\"ltx_eqn_cell ltx_eqn_center_padleft\"></span>\n<span class=\"ltx_td ltx_align_right ltx_eqn_cell\"></span>\n<span class=\"ltx_td ltx_align_left ltx_eqn_cell\"></span>\n<span class=\"ltx_eqn_cell ltx_eqn_center_padright\"></span></span></span>\n</span>\n</span></span>\n<span class=\"ltx_item\" id=\"S6.I2.ix1\" style=\"list-style-type:none;\"><span class=\"ltx_tag ltx_tag_item\">\u00a0</span>\n<span class=\"ltx_para\" id=\"S6.I2.ix1.p1\">\n<span class=\"ltx_p\" id=\"S6.I2.ix1.p1.1\"><span class=\"ltx_text\" id=\"S6.I2.ix1.p1.1.1\" style=\"font-size:80%;\">Then repeat from step\u00a02 with </span><span class=\"ltx_text\" id=\"S6.I2.ix1.p1.1.2\" style=\"font-size:80%;\"> until convergence.</span></span>\n</span></span>\n</span>\n</span><span class=\"ltx_text\" id=\"S6.T2.6.2\" style=\"font-size:80%;\"></span></p>\n</figure>",
|
| 189 |
+
"capture": "Table 2: The coupled shape-Newton scheme for . "
|
| 190 |
+
}
|
| 191 |
+
},
|
| 192 |
+
"image_paths": {
|
| 193 |
+
"2(a)": {
|
| 194 |
+
"figure_path": "2305.14254v2_figure_2(a).png",
|
| 195 |
+
"caption": "(a) The initial domain and the triangulation.\nFigure 2: The initial domain and the change of the domain in three following Newton-like iterations. The free surface is updated vertically.",
|
| 196 |
+
"url": "http://arxiv.org/html/2305.14254v2/x1.png"
|
| 197 |
+
},
|
| 198 |
+
"2(b)": {
|
| 199 |
+
"figure_path": "2305.14254v2_figure_2(b).png",
|
| 200 |
+
"caption": "(b) The domain and the triangulation after the first iteration.\nFigure 2: The initial domain and the change of the domain in three following Newton-like iterations. The free surface is updated vertically.",
|
| 201 |
+
"url": "http://arxiv.org/html/2305.14254v2/x2.png"
|
| 202 |
+
},
|
| 203 |
+
"2(c)": {
|
| 204 |
+
"figure_path": "2305.14254v2_figure_2(c).png",
|
| 205 |
+
"caption": "(c) The domain and the triangulation after the second iteration.\nFigure 2: The initial domain and the change of the domain in three following Newton-like iterations. The free surface is updated vertically.",
|
| 206 |
+
"url": "http://arxiv.org/html/2305.14254v2/x3.png"
|
| 207 |
+
},
|
| 208 |
+
"2(d)": {
|
| 209 |
+
"figure_path": "2305.14254v2_figure_2(d).png",
|
| 210 |
+
"caption": "(d) The domain and the triangulation after the third iteration.\nFigure 2: The initial domain and the change of the domain in three following Newton-like iterations. The free surface is updated vertically.",
|
| 211 |
+
"url": "http://arxiv.org/html/2305.14254v2/x4.png"
|
| 212 |
+
},
|
| 213 |
+
"3": {
|
| 214 |
+
"figure_path": "2305.14254v2_figure_3.png",
|
| 215 |
+
"caption": "Figure 3: The Dirichlet error \u2016\u03d5\u2212h\u2016L2subscriptnormitalic-\u03d5\u210esubscript\ud835\udc3f2||\\phi-h||_{L_{2}}| | italic_\u03d5 - italic_h | | start_POSTSUBSCRIPT italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT end_POSTSUBSCRIPT and surface error \u2016\u03b7\u2212\u03b7^\u2016L2subscriptnorm\ud835\udf02^\ud835\udf02subscript\ud835\udc3f2||\\eta-\\hat{\\eta}||_{L_{2}}| | italic_\u03b7 - over^ start_ARG italic_\u03b7 end_ARG | | start_POSTSUBSCRIPT italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT end_POSTSUBSCRIPT on \u0393Fsubscript\u0393\ud835\udc39\\Gamma_{F}roman_\u0393 start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT measured in L\u221esubscript\ud835\udc3fL_{\\infty}italic_L start_POSTSUBSCRIPT \u221e end_POSTSUBSCRIPT-form against the number of iterations. The upper plot shows the Dirichlet error, and the lower shows the surface error. The values of N+1\ud835\udc411N+1italic_N + 1 are the number of the nodes along the x\ud835\udc65xitalic_x-axis.",
|
| 216 |
+
"url": "http://arxiv.org/html/2305.14254v2/x5.png"
|
| 217 |
+
},
|
| 218 |
+
"5": {
|
| 219 |
+
"figure_path": "2305.14254v2_figure_5.png",
|
| 220 |
+
"caption": "Figure 5: An example of the domain and the triangulation with \u03b1=\u03c04\ud835\udefc\ud835\udf0b4\\alpha=\\frac{\\pi}{4}italic_\u03b1 = divide start_ARG italic_\u03c0 end_ARG start_ARG 4 end_ARG, F=2\ud835\udc392F=2italic_F = 2 and the half width of the triangle w0=0.5subscript\ud835\udc6400.5w_{0}=0.5italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 0.5.",
|
| 221 |
+
"url": "http://arxiv.org/html/2305.14254v2/x6.png"
|
| 222 |
+
},
|
| 223 |
+
"6": {
|
| 224 |
+
"figure_path": "2305.14254v2_figure_6.png",
|
| 225 |
+
"caption": "Figure 6: The size of \u2016\u03b4\u2062\u03d5\u2016L2subscriptnorm\ud835\udeffitalic-\u03d5subscript\ud835\udc3f2||\\delta\\phi||_{L_{2}}| | italic_\u03b4 italic_\u03d5 | | start_POSTSUBSCRIPT italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT end_POSTSUBSCRIPT and \u2016\u03b4\u2062\u03b7\u2016L2subscriptnorm\ud835\udeff\ud835\udf02subscript\ud835\udc3f2||\\delta\\eta||_{L_{2}}| | italic_\u03b4 italic_\u03b7 | | start_POSTSUBSCRIPT italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT end_POSTSUBSCRIPT on \u0393Fsubscript\u0393\ud835\udc39\\Gamma_{F}roman_\u0393 start_POSTSUBSCRIPT italic_F end_POSTSUBSCRIPT against the number of iterations with \u03b1=\u03c08\ud835\udefc\ud835\udf0b8\\alpha=\\frac{\\pi}{8}italic_\u03b1 = divide start_ARG italic_\u03c0 end_ARG start_ARG 8 end_ARG, w0=0.3subscript\ud835\udc6400.3w_{0}=0.3italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 0.3 and F=3\ud835\udc393F=3italic_F = 3. The values of N+1\ud835\udc411N+1italic_N + 1 are the number of the nodes along the x\ud835\udc65xitalic_x-axis.",
|
| 226 |
+
"url": "http://arxiv.org/html/2305.14254v2/x7.png"
|
| 227 |
+
},
|
| 228 |
+
"7(a)": {
|
| 229 |
+
"figure_path": "2305.14254v2_figure_7(a).png",
|
| 230 |
+
"caption": "(a) The final domain for \u03b1=\u03c016\ud835\udefc\ud835\udf0b16\\alpha=\\frac{\\pi}{16}italic_\u03b1 = divide start_ARG italic_\u03c0 end_ARG start_ARG 16 end_ARG, w0=0.5subscript\ud835\udc6400.5w_{0}=0.5italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 0.5, and F=2\ud835\udc392F=2italic_F = 2.\nFigure 7: The final domains for various \u03b1\ud835\udefc\\alphaitalic_\u03b1, w0subscript\ud835\udc640w_{0}italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT and F\ud835\udc39Fitalic_F, where their free boundaries are the numerical solutions.",
|
| 231 |
+
"url": "http://arxiv.org/html/2305.14254v2/x8.png"
|
| 232 |
+
},
|
| 233 |
+
"7(b)": {
|
| 234 |
+
"figure_path": "2305.14254v2_figure_7(b).png",
|
| 235 |
+
"caption": "(b) The final domain for \u03b1=\u03c08\ud835\udefc\ud835\udf0b8\\alpha=\\frac{\\pi}{8}italic_\u03b1 = divide start_ARG italic_\u03c0 end_ARG start_ARG 8 end_ARG, w0=0.5subscript\ud835\udc6400.5w_{0}=0.5italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 0.5, and F=1.4\ud835\udc391.4F=1.4italic_F = 1.4.\nFigure 7: The final domains for various \u03b1\ud835\udefc\\alphaitalic_\u03b1, w0subscript\ud835\udc640w_{0}italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT and F\ud835\udc39Fitalic_F, where their free boundaries are the numerical solutions.",
|
| 236 |
+
"url": "http://arxiv.org/html/2305.14254v2/x9.png"
|
| 237 |
+
},
|
| 238 |
+
"7(c)": {
|
| 239 |
+
"figure_path": "2305.14254v2_figure_7(c).png",
|
| 240 |
+
"caption": "(c) The final domain for \u03b1=\u03c08\ud835\udefc\ud835\udf0b8\\alpha=\\frac{\\pi}{8}italic_\u03b1 = divide start_ARG italic_\u03c0 end_ARG start_ARG 8 end_ARG, w0=0.5subscript\ud835\udc6400.5w_{0}=0.5italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 0.5, and F=2\ud835\udc392F=2italic_F = 2.\nFigure 7: The final domains for various \u03b1\ud835\udefc\\alphaitalic_\u03b1, w0subscript\ud835\udc640w_{0}italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT and F\ud835\udc39Fitalic_F, where their free boundaries are the numerical solutions.",
|
| 241 |
+
"url": "http://arxiv.org/html/2305.14254v2/x10.png"
|
| 242 |
+
},
|
| 243 |
+
"7(d)": {
|
| 244 |
+
"figure_path": "2305.14254v2_figure_7(d).png",
|
| 245 |
+
"caption": "(d) The final domain for \u03b1=\u03c08\ud835\udefc\ud835\udf0b8\\alpha=\\frac{\\pi}{8}italic_\u03b1 = divide start_ARG italic_\u03c0 end_ARG start_ARG 8 end_ARG, w0=1subscript\ud835\udc6401w_{0}=1italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 1, and F=2\ud835\udc392F=2italic_F = 2.\nFigure 7: The final domains for various \u03b1\ud835\udefc\\alphaitalic_\u03b1, w0subscript\ud835\udc640w_{0}italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT and F\ud835\udc39Fitalic_F, where their free boundaries are the numerical solutions.",
|
| 246 |
+
"url": "http://arxiv.org/html/2305.14254v2/x11.png"
|
| 247 |
+
},
|
| 248 |
+
"8(a)": {
|
| 249 |
+
"figure_path": "2305.14254v2_figure_8(a).png",
|
| 250 |
+
"caption": "(a) The maximum value y\ud835\udc66yitalic_y on the free boundary at x=0\ud835\udc650x=0italic_x = 0 against F\ud835\udc39Fitalic_F with w0=0.1subscript\ud835\udc6400.1w_{0}=0.1italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 0.1 for different values of \u03b1\ud835\udefc\\alphaitalic_\u03b1.\nFigure 8: The maximum value y0subscript\ud835\udc660y_{0}italic_y start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT on the free boundary at x=0\ud835\udc650x=0italic_x = 0 against F\ud835\udc39Fitalic_F for different values of \u03b1\ud835\udefc\\alphaitalic_\u03b1 and w0subscript\ud835\udc640w_{0}italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT.",
|
| 251 |
+
"url": "http://arxiv.org/html/2305.14254v2/x12.png"
|
| 252 |
+
},
|
| 253 |
+
"8(b)": {
|
| 254 |
+
"figure_path": "2305.14254v2_figure_8(b).png",
|
| 255 |
+
"caption": "(b) The maximum value y\ud835\udc66yitalic_y on the free boundary at x=0\ud835\udc650x=0italic_x = 0 against F\ud835\udc39Fitalic_F with w0=0.3subscript\ud835\udc6400.3w_{0}=0.3italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 0.3 for different values of \u03b1\ud835\udefc\\alphaitalic_\u03b1.\nFigure 8: The maximum value y0subscript\ud835\udc660y_{0}italic_y start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT on the free boundary at x=0\ud835\udc650x=0italic_x = 0 against F\ud835\udc39Fitalic_F for different values of \u03b1\ud835\udefc\\alphaitalic_\u03b1 and w0subscript\ud835\udc640w_{0}italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT.",
|
| 256 |
+
"url": "http://arxiv.org/html/2305.14254v2/x13.png"
|
| 257 |
+
},
|
| 258 |
+
"8(c)": {
|
| 259 |
+
"figure_path": "2305.14254v2_figure_8(c).png",
|
| 260 |
+
"caption": "(c) The maximum value y\ud835\udc66yitalic_y on the free boundary at x=0\ud835\udc650x=0italic_x = 0 against F\ud835\udc39Fitalic_F with w0=0.5subscript\ud835\udc6400.5w_{0}=0.5italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 0.5 for different values of \u03b1\ud835\udefc\\alphaitalic_\u03b1.\nFigure 8: The maximum value y0subscript\ud835\udc660y_{0}italic_y start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT on the free boundary at x=0\ud835\udc650x=0italic_x = 0 against F\ud835\udc39Fitalic_F for different values of \u03b1\ud835\udefc\\alphaitalic_\u03b1 and w0subscript\ud835\udc640w_{0}italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT.",
|
| 261 |
+
"url": "http://arxiv.org/html/2305.14254v2/x14.png"
|
| 262 |
+
},
|
| 263 |
+
"8(d)": {
|
| 264 |
+
"figure_path": "2305.14254v2_figure_8(d).png",
|
| 265 |
+
"caption": "(d) The maximum value y\ud835\udc66yitalic_y on the free boundary at x=0\ud835\udc650x=0italic_x = 0 against F\ud835\udc39Fitalic_F with w0=0.7subscript\ud835\udc6400.7w_{0}=0.7italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 0.7 for different values of \u03b1\ud835\udefc\\alphaitalic_\u03b1.\nFigure 8: The maximum value y0subscript\ud835\udc660y_{0}italic_y start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT on the free boundary at x=0\ud835\udc650x=0italic_x = 0 against F\ud835\udc39Fitalic_F for different values of \u03b1\ud835\udefc\\alphaitalic_\u03b1 and w0subscript\ud835\udc640w_{0}italic_w start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT.",
|
| 266 |
+
"url": "http://arxiv.org/html/2305.14254v2/x15.png"
|
| 267 |
+
}
|
| 268 |
+
},
|
| 269 |
+
"validation": true,
|
| 270 |
+
"references": [
|
| 271 |
+
{
|
| 272 |
+
"1": {
|
| 273 |
+
"title": "Special Issue on Modeling Error Estimation and Adaptive Modeling.",
|
| 274 |
+
"author": "K. van der Zee, E. van Brummelen, I. Akkerman, and R. de Borst,\nGoal-oriented error estimation and adaptivity for fluid\u2013structure\ninteraction using exact linearized adjoints, Computer Methods in Applied\nMechanics and Engineering, 200 (2011), pp. 2738\u20132757,\nhttps://doi.org/https://doi.org/10.1016/j.cma.2010.12.010,\nhttps://www.sciencedirect.com/science/article/pii/S0045782510003555.",
|
| 275 |
+
"venue": null,
|
| 276 |
+
"url": null
|
| 277 |
+
}
|
| 278 |
+
}
|
| 279 |
+
],
|
| 280 |
+
"url": "http://arxiv.org/html/2305.14254v2"
|
| 281 |
+
}
|
20240921/2310.19902v2.json
ADDED
|
@@ -0,0 +1,201 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Herd: Using multiple, smaller LLMs to match the performances of proprietary, large LLMs via an intelligent composer",
|
| 3 |
+
"abstract": "Currently, over a thousand LLMs exist that are multi-purpose and are capable of performing real world tasks, including Q&A, text summarization, content generation, etc. However, accessibility, scale and reliability of free models prevents them from being widely deployed in everyday use cases. To address the first two issues of access and scale, organisations such as HuggingFace have created model repositories where users have uploaded model weights and quantized versions of models trained using different paradigms, as well as model cards describing their training process. While some models report performance on commonly used benchmarks, not all do, and interpreting the real world impact of trading off performance on a benchmark for model deployment cost, is unclear. Here, we show that a herd of open source models can match or exceed the performance of proprietary models via an intelligent router. We show that a Herd of open source models is able to match the accuracy of ChatGPT, despite being composed of models that are effectively 2.5x smaller. We show that in cases where GPT is not able to answer the query, Herd is able to identify a model that can, at least 40% of the time.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Large language models have found novel ways to increase the number of use cases, such as by expanding the number of parameters, combining existing models to augment a single models\u2019 functionality and quanitizing large models to fit on smaller devices [4 ###reference_b4###, 12 ###reference_b12###, 9 ###reference_b9###, 18 ###reference_b18###, 2 ###reference_b2###, 8 ###reference_b8###, 13 ###reference_b13###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###]. The rapid expansion of model availability has created a significant challenge in practice, where corporations want to expose performant LLM endpoints for their users, and have to spend time evaluating models to find the best one that works for them in practice. To overcome this problem, engineers often resort to proprietary models without knowing if there are open-source models available at a comparable performance standard.\n###figure_1### This often leads to the problem elaborated in Figure 1 ###reference_###, showing examples of questions taken from MMLU that ChatGPT (GPT 3.5 Turbo) answers incorrectly, but there is some open source model that can answer the question correctly. We use this insight to try and construct a herd of models such that at least one model in the herd can answer any incoming query correctly.\nRecent model evaluation frameworks [6 ###reference_b6###, 19 ###reference_b19###] help users compare LLMs against each other, but the growing pace of model formats, outpaces one-size-fits-all comparison software suites. Empirical evidence in this work, reveals that open source models have caught up with leading proprietary models, but not all open source models feature on leaderboards, due to their vast number.\nDeployment of models also remains a key challenge. The 70b parameter Llama-2, in 16-bit precision, requires 2 80Gb A100 GPUs, and in practice, users might want several models running in parallel. Sacrificing parameter count to cut costs risks performance degradation, the exact magnitude of which is unknown before deployment.\nWhile quantized models might alleviate some of the challenges associated with model deployment, finding performant quantized models, navigating their formats and knowing their training details, such as what datasets were used in their quantisation calibration, requires expertise.\nIn addition to quantized variants of models, specific model variants exist with chat capabilities, with different performance metrics from non-chat models. Others with more specific domain expertises such as science or code [17 ###reference_b17###, 1 ###reference_b1###], might be useful for some user applications but aren\u2019t fine-tuned for chat capability, making it harder to pick one model to use in production.\nToday the Huggingface (HF) model repository contains 24,000 machine learning models for text generation. While model cards might provide some insight into the dataset that a model is trained on, common practices such as fine-tuning models using inputs from other large language models or model merging [10 ###reference_b10###, 16 ###reference_b16###, 14 ###reference_b14###, 11 ###reference_b11###] has made it difficult to track what data was used to train the model. This has also made it challenging to track what datasets or tasks one can expect the models to be performant on. 
Futher, not all open source models have detailed model cards, making trusting them in deployment even more challenging.\nTogether, it would be a useful service to expose an endpoint that would process an incoming users\u2019 request by abstracting away model selection. Here, we explore the advantage of exposing a model herd of open source models, which outperforms a larger, proprietary large language model, offering size advantages. We also train a Tryage router [7 ###reference_b7###] to predict model performance, and show that the model herd is able to answer 74% of incoming queries with performance comparable to or better than ChatGPT."
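The introduction above ends by training a Tryage-style router that predicts, from the prompt text alone, how well each herd member will perform, and then dispatches the query to the best predicted model. A minimal sketch of that dispatch step is given below; the `predict_scores` regressor and the dictionary of model endpoints are hypothetical stand-ins for illustration, not the paper's actual API.

```python
from typing import Callable, Dict

def route_query(prompt: str,
                predict_scores: Callable[[str], Dict[str, float]],
                herd: Dict[str, Callable[[str], str]]) -> str:
    """Send the prompt to the herd member with the highest predicted score.

    `predict_scores` maps a prompt to a per-model predicted performance
    (e.g. an expected F1); `herd` maps model names to callable endpoints.
    Both are assumptions made for this sketch.
    """
    scores = predict_scores(prompt)
    best_model = max(scores, key=scores.get)
    return herd[best_model](prompt)
```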
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Demonstrating Herd",
|
| 15 |
+
"text": "###figure_2### We find that a herd of open source models is able to beat ChatGPT (Figure 2 ###reference_###) despite being effectively less than 30% of the size (effective size measured as the average size of models weighted by the number of examples allocated to them. Further, none of the models in the herd were individually better than ChatGPT, but together, they were able to surpass ChatGPT\u2019s performance. Further, all the models are open source, and the herd can be seamlessly expanded, contracted or interchanged for other models.\n###figure_3### ###figure_4### ###figure_5### We trained a tryage router [7 ###reference_b7###] to model the performances of a herd and found that the router was able to successfully allocate incoming queries to models that produced aggregate performance comparable to GPT 3.5 Turbo despite being effectively 2.5x smaller 3(a) ###reference_.sf1### 111exact number of parameters in ChatGPT (GPT 3.5 Turbo) unknown, based on reported information. Further, some models in the herd are quantized, meaning they can be run on edge compute / cloud compute - a user can trade off the size of a herd for compute cost.\nWe show that Herd can capture knowledge in cases where ChatGPT fails to answer an incoming query. While any single model might not be able to answer all the incoming queries, Herd is able to find a model that can answer each query, based on the input text of the prompt. ChatGPT is only able to beat a herd of open source models 26% of the time, implying 74% of the queries can be answered by open source models (Fig. 3(b) ###reference_.sf2###, \u2018beat\u2019 is defined as F1 in excess of 5%).\nIn the cases where ChatGPT was wrong, defined as when ChatGPT had an F1 score of less than 0.9, Herd was able to achieve a correct answer (defined as when any model in the Herd had an F1 score greater than 0.95), 69.3% of the time. A predictive router, was able to identify a model that can answer the query correctly, 40% of the time (Tryage bar in Fig. 3(c) ###reference_.sf3###). The mean of the F1s of the answers from each model, as well as the aggregate F1s from Herd and the predictive router, are shown in Figure 3(c) ###reference_.sf3###."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Conclusion and discussion",
|
| 21 |
+
"text": "In this work we present the result that a Herd of open-sourced models can achieve performance comparable or better than ChatGPT, at a fraction of the compute cost and zero query cost. Further, when proprietary models cannot answer a query, a herd of open source models, are able to cover a significant portion of the deficit. This system offers a new model paradigm to compete against closed source models, by leveraging widely available open source technology."
|
| 22 |
+
}
|
| 23 |
+
],
|
| 24 |
+
"appendix": [],
|
| 25 |
+
"tables": {},
|
| 26 |
+
"image_paths": {
|
| 27 |
+
"1": {
|
| 28 |
+
"figure_path": "2310.19902v2_figure_1.png",
|
| 29 |
+
"caption": "Figure 1: In practice, not all models are able to answer all questions accurately (the ones that do answer the questions correctly have their answers boxed in green), which leads to the practical challenge in picking an ensemble of models that has at least one highly performant model for every question. Herd attempts to solve this problem by constructing a herd of large language models that collectively can answer the query accurately, and by learning the association between input text and performance of each LLM.",
|
| 30 |
+
"url": "http://arxiv.org/html/2310.19902v2/extracted/5869399/Herd_Figures/Fig1_Some_models_work_sometimes/Herd.png"
|
| 31 |
+
},
|
| 32 |
+
"2": {
|
| 33 |
+
"figure_path": "2310.19902v2_figure_2.png",
|
| 34 |
+
"caption": "Figure 2: Open source model Herds outperform proprietary models such as ChatGPT on MMLU with decreased model size.",
|
| 35 |
+
"url": "http://arxiv.org/html/2310.19902v2/extracted/5869399/Herd_Figures/Fig2_Perf_vs_size_binned/performance_vs_size_binned.png"
|
| 36 |
+
},
|
| 37 |
+
"3(a)": {
|
| 38 |
+
"figure_path": "2310.19902v2_figure_3(a).png",
|
| 39 |
+
"caption": "((a))\nFigure 3: a) A router trained to model the performance of a herd offers comparable performance to GPT 3.5 Turbo (mean performances shown as horizontal lines). b) GPT exceeds the performance of the Herd in only 26% of incoming queries, implying 74% of incoming queries can be answered by open source models in the Herd. c) In questions that ChatGPT gets wrong the Herd can find models that perform correctly (Average of 0.9 F1). A routing model, achieves an aggregate of 0.76 F1 on these questions.",
|
| 40 |
+
"url": "http://arxiv.org/html/2310.19902v2/extracted/5869399/Herd_Figures/Fig3_Thresholded_performance/gpt_vs_herd.png"
|
| 41 |
+
},
|
| 42 |
+
"3(b)": {
|
| 43 |
+
"figure_path": "2310.19902v2_figure_3(b).png",
|
| 44 |
+
"caption": "((b))\nFigure 3: a) A router trained to model the performance of a herd offers comparable performance to GPT 3.5 Turbo (mean performances shown as horizontal lines). b) GPT exceeds the performance of the Herd in only 26% of incoming queries, implying 74% of incoming queries can be answered by open source models in the Herd. c) In questions that ChatGPT gets wrong the Herd can find models that perform correctly (Average of 0.9 F1). A routing model, achieves an aggregate of 0.76 F1 on these questions.",
|
| 45 |
+
"url": "http://arxiv.org/html/2310.19902v2/extracted/5869399/Herd_Figures/Fig3_Thresholded_performance/gpt_clearly_better.png"
|
| 46 |
+
},
|
| 47 |
+
"3(c)": {
|
| 48 |
+
"figure_path": "2310.19902v2_figure_3(c).png",
|
| 49 |
+
"caption": "((c))\nFigure 3: a) A router trained to model the performance of a herd offers comparable performance to GPT 3.5 Turbo (mean performances shown as horizontal lines). b) GPT exceeds the performance of the Herd in only 26% of incoming queries, implying 74% of incoming queries can be answered by open source models in the Herd. c) In questions that ChatGPT gets wrong the Herd can find models that perform correctly (Average of 0.9 F1). A routing model, achieves an aggregate of 0.76 F1 on these questions.",
|
| 50 |
+
"url": "http://arxiv.org/html/2310.19902v2/extracted/5869399/Herd_Figures/Fig3_Thresholded_performance/gpt_vs_herd_where_gpt_wrong.png"
|
| 51 |
+
}
|
| 52 |
+
},
|
| 53 |
+
"validation": true,
|
| 54 |
+
"references": [
|
| 55 |
+
{
|
| 56 |
+
"1": {
|
| 57 |
+
"title": "Towards Efficient Post-training Quantization of Pre-trained\nLanguage Models.",
|
| 58 |
+
"author": "Haoli Bai, Lu Hou, Lifeng Shang, Xin Jiang, Irwin King, and Michael R Lyu.",
|
| 59 |
+
"venue": null,
|
| 60 |
+
"url": null
|
| 61 |
+
}
|
| 62 |
+
},
|
| 63 |
+
{
|
| 64 |
+
"2": {
|
| 65 |
+
"title": "On the opportunities and risks of foundation models.",
|
| 66 |
+
"author": "Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney\nvon Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma\nBrunskill, et al.",
|
| 67 |
+
"venue": "arXiv preprint arXiv:2108.07258, 2021.",
|
| 68 |
+
"url": null
|
| 69 |
+
}
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"3": {
|
| 73 |
+
"title": "Language models are few-shot learners.",
|
| 74 |
+
"author": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla\nDhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell,\net al.",
|
| 75 |
+
"venue": "Advances in neural information processing systems,\n33:1877\u20131901, 2020.",
|
| 76 |
+
"url": null
|
| 77 |
+
}
|
| 78 |
+
},
|
| 79 |
+
{
|
| 80 |
+
"4": {
|
| 81 |
+
"title": "Qlora: Efficient finetuning of quantized llms.",
|
| 82 |
+
"author": "Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer.",
|
| 83 |
+
"venue": "arXiv preprint arXiv:2305.14314, 2023.",
|
| 84 |
+
"url": null
|
| 85 |
+
}
|
| 86 |
+
},
|
| 87 |
+
{
|
| 88 |
+
"5": {
|
| 89 |
+
"title": "A framework for few-shot language model evaluation, September 2021.",
|
| 90 |
+
"author": "Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles\nFoster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff,\nJason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang,\nand Andy Zou.",
|
| 91 |
+
"venue": null,
|
| 92 |
+
"url": null
|
| 93 |
+
}
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"6": {
|
| 97 |
+
"title": "Tryage: Real-time, intelligent Routing of User Prompts to\nLarge Language Models, August 2023.",
|
| 98 |
+
"author": "Surya Narayanan Hari and Matt Thomson.",
|
| 99 |
+
"venue": "arXiv:2308.11601 [cs].",
|
| 100 |
+
"url": null
|
| 101 |
+
}
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"7": {
|
| 105 |
+
"title": "Masked autoencoders are scalable vision learners.",
|
| 106 |
+
"author": "Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Doll\u00e1r, and Ross\nGirshick.",
|
| 107 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and\nPattern Recognition, pages 16000\u201316009, 2022.",
|
| 108 |
+
"url": null
|
| 109 |
+
}
|
| 110 |
+
},
|
| 111 |
+
{
|
| 112 |
+
"8": {
|
| 113 |
+
"title": "LoRA: Low-Rank Adaptation of Large Language Models,\nOctober 2021.",
|
| 114 |
+
"author": "Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean\nWang, Lu Wang, and Weizhu Chen.",
|
| 115 |
+
"venue": "arXiv:2106.09685 [cs].",
|
| 116 |
+
"url": null
|
| 117 |
+
}
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"9": {
|
| 121 |
+
"title": "Dataless Knowledge Fusion by Merging Weights of Language\nModels, June 2023.",
|
| 122 |
+
"author": "Xisen Jin, Xiang Ren, Daniel Preotiuc-Pietro, and Pengxiang Cheng.",
|
| 123 |
+
"venue": "arXiv:2212.09849 [cs].",
|
| 124 |
+
"url": null
|
| 125 |
+
}
|
| 126 |
+
},
|
| 127 |
+
{
|
| 128 |
+
"10": {
|
| 129 |
+
"title": "Orca: Progressive learning from complex explanation traces of gpt-4,\n2023.",
|
| 130 |
+
"author": "Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid\nPalangi, and Ahmed Awadallah.",
|
| 131 |
+
"venue": null,
|
| 132 |
+
"url": null
|
| 133 |
+
}
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"11": {
|
| 137 |
+
"title": "Gpt-4 technical report, 2023.",
|
| 138 |
+
"author": "OpenAI.",
|
| 139 |
+
"venue": null,
|
| 140 |
+
"url": null
|
| 141 |
+
}
|
| 142 |
+
},
|
| 143 |
+
{
|
| 144 |
+
"12": {
|
| 145 |
+
"title": "Stanford alpaca: An instruction-following llama model, 2023.",
|
| 146 |
+
"author": "Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos\nGuestrin, Percy Liang, and Tatsunori B Hashimoto.",
|
| 147 |
+
"venue": null,
|
| 148 |
+
"url": null
|
| 149 |
+
}
|
| 150 |
+
},
|
| 151 |
+
{
|
| 152 |
+
"13": {
|
| 153 |
+
"title": "Stanford alpaca: An instruction-following llama model.",
|
| 154 |
+
"author": "Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos\nGuestrin, Percy Liang, and Tatsunori B. Hashimoto.",
|
| 155 |
+
"venue": "https://github.com/tatsu-lab/stanford_alpaca, 2023.",
|
| 156 |
+
"url": null
|
| 157 |
+
}
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"14": {
|
| 161 |
+
"title": "Well-Read Students Learn Better: On the Importance of\nPre-training Compact Models, September 2019.",
|
| 162 |
+
"author": "Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.",
|
| 163 |
+
"venue": "arXiv:1908.08962 [cs].",
|
| 164 |
+
"url": null
|
| 165 |
+
}
|
| 166 |
+
},
|
| 167 |
+
{
|
| 168 |
+
"15": {
|
| 169 |
+
"title": "OpenChat: Advancing Open-source Language Models with Imperfect\nData, 7 2023.",
|
| 170 |
+
"author": "Guan Wang, Sijie Cheng, Qiying Yu, and Changling Liu.",
|
| 171 |
+
"venue": null,
|
| 172 |
+
"url": null
|
| 173 |
+
}
|
| 174 |
+
},
|
| 175 |
+
{
|
| 176 |
+
"16": {
|
| 177 |
+
"title": "BLOOM: A 176B-Parameter Open-Access Multilingual\nLanguage Model, June 2023.",
|
| 178 |
+
"author": "BigScience Workshop, Teven Le Scao, Angela Fan, Christopher Akiki, Ellie\nPavlick, Suzana Ili\u0107, Daniel Hesslow, Roman Castagn\u00e9, Alexandra Sasha\nLuccioni, Fran\u00e7ois Yvon, Matthias Gall\u00e9, Jonathan Tow, Alexander M. Rush,\nStella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang,\nBeno\u00eet Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji\nRuwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu\nNguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo\nLauren\u00e7on, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel,\nAaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna\nRogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue,\nChristopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani,\nDragomir Radev, Eduardo Gonz\u00e1lez Ponferrada, Efrat Levkovizh, Ethan Kim,\nEyal Bar Natan, Francesco De Toni, G\u00e9rard Dupont, Germ\u00e1n Kruszewski, Giada\nPistilli, Hady Elsahar, Hamza Benyamina, Hieu Tran, Ian Yu, Idris Abdulmumin,\nIsaac Johnson, Itziar Gonzalez-Dios, Javier de la Rosa, Jenny Chim, Jesse\nDodge, Jian Zhu, Jonathan Chang, J\u00f6rg Frohberg, Joseph Tobing, Joydeep\nBhattacharjee, Khalid Almubarak, Kimbo Chen, Kyle Lo, Leandro Von Werra, Leon\nWeber, Long Phan, Loubna Ben allal, Ludovic Tanguy, Manan Dey, Manuel Romero\nMu\u00f1oz, Maraim Masoud, Mar\u00eda Grandury, Mario \u0160a\u0161ko, Max Huang, Maximin\nCoavoux, Mayank Singh, Mike Tian-Jian Jiang, Minh Chien Vu, Mohammad A.\nJauhar, Mustafa Ghaleb, Nishant Subramani, Nora Kassner, Nurulaqilla Khamis,\nOlivier Nguyen, Omar Espejel, Ona de Gibert, Paulo Villegas, Peter Henderson,\nPierre Colombo, Priscilla Amuok, Quentin Lhoest, Rheza Harliman, Rishi\nBommasani, Roberto Luis L\u00f3pez, Rui Ribeiro, Salomey Osei, Sampo Pyysalo,\nSebastian Nagel, Shamik Bose, Shamsuddeen Hassan Muhammad, Shanya Sharma,\nShayne Longpre, Somaieh Nikpoor, Stanislav Silberberg, Suhas Pai, Sydney\nZink, Tiago Timponi Torrent, Timo Schick, Tristan Thrush, Valentin Danchev,\nVassilina Nikoulina, Veronika Laippala, Violette Lepercq, Vrinda Prabhu, Zaid\nAlyafeai, Zeerak Talat, Arun Raja, Benjamin Heinzerling, Chenglei Si,\nDavut Emre Ta\u015far, Elizabeth Salesky, Sabrina J. Mielke, Wilson Y. Lee,\nAbheesht Sharma, Andrea Santilli, Antoine Chaffin, Arnaud Stiegler, Debajyoti\nDatta, Eliza Szczechla, Gunjan Chhablani, Han Wang, Harshit Pandey, Hendrik\nStrobelt, Jason Alan Fries, Jos Rozen, Leo Gao, Lintang Sutawika, M. Saiful\nBari, Maged S. Al-shaibani, Matteo Manica, Nihal Nayak, Ryan Teehan, Samuel\nAlbanie, Sheng Shen, Srulik Ben-David, Stephen H. 
Bach, Taewoon Kim, Tali\nBers, Thibault Fevry, Trishala Neeraj, Urmish Thakker, Vikas Raunak, Xiangru\nTang, Zheng-Xin Yong, Zhiqing Sun, Shaked Brody, Yallow Uri, Hadar Tojarieh,\nAdam Roberts, Hyung Won Chung, Jaesung Tae, Jason Phang, Ofir Press, Conglong\nLi, Deepak Narayanan, Hatim Bourfoune, Jared Casper, Jeff Rasley, Max\nRyabinin, Mayank Mishra, Minjia Zhang, Mohammad Shoeybi, Myriam Peyrounette,\nNicolas Patry, Nouamane Tazi, Omar Sanseviero, Patrick von Platen, Pierre\nCornette, Pierre Fran\u00e7ois Lavall\u00e9e, R\u00e9mi Lacroix, Samyam Rajbhandari,\nSanchit Gandhi, Shaden Smith, St\u00e9phane Requena, Suraj Patil, Tim Dettmers,\nAhmed Baruwa, Amanpreet Singh, Anastasia Cheveleva, Anne-Laure Ligozat, Arjun\nSubramonian, Aur\u00e9lie N\u00e9v\u00e9ol, Charles Lovering, Dan Garrette, Deepak\nTunuguntla, Ehud Reiter, Ekaterina Taktasheva, Ekaterina Voloshina, Eli\nBogdanov, Genta Indra Winata, Hailey Schoelkopf, Jan-Christoph Kalo,\nJekaterina Novikova, Jessica Zosa Forde, Jordan Clive, Jungo Kasai, Ken\nKawamura, Liam Hazan, Marine Carpuat, Miruna Clinciu, Najoung Kim, Newton\nCheng, Oleg Serikov, Omer Antverg, Oskar van der Wal, Rui Zhang, Ruochen\nZhang, Sebastian Gehrmann, Shachar Mirkin, Shani Pais, Tatiana Shavrina,\nThomas Scialom, Tian Yun, Tomasz Limisiewicz, Verena Rieser, Vitaly Protasov,\nVladislav Mikhailov, Yada Pruksachatkun, Yonatan Belinkov, Zachary Bamberger,\nZden\u011bk Kasner, Alice Rueda, Amanda Pestana, Amir Feizpour, Ammar Khan, Amy\nFaranak, Ana Santos, Anthony Hevia, Antigona Unldreaj, Arash Aghagol, Arezoo\nAbdollahi, Aycha Tammour, Azadeh HajiHosseini, Bahareh Behroozi, Benjamin\nAjibade, Bharat Saxena, Carlos Mu\u00f1oz Ferrandis, Daniel McDuff, Danish\nContractor, David Lansky, Davis David, Douwe Kiela, Duong A. Nguyen, Edward\nTan, Emi Baylor, Ezinwanne Ozoani, Fatima Mirza, Frankline Ononiwu, Habib\nRezanejad, Hessie Jones, Indrani Bhattacharya, Irene Solaiman, Irina Sedenko,\nIsar Nejadgholi, Jesse Passmore, Josh Seltzer, Julio Bonis Sanz, Livia Dutra,\nMairon Samagaio, Maraim Elbadri, Margot Mieskes, Marissa Gerchick, Martha\nAkinlolu, Michael McKenna, Mike Qiu, Muhammed Ghauri, Mykola Burynok, Nafis\nAbrar, Nazneen Rajani, Nour Elkott, Nour Fahmy, Olanrewaju Samuel, Ran An,\nRasmus Kromann, Ryan Hao, Samira Alizadeh, Sarmad Shubber, Silas Wang, Sourav\nRoy, Sylvain Viguier, Thanh Le, Tobi Oyebade, Trieu Le, Yoyo Yang, Zach\nNguyen, Abhinav Ramesh Kashyap, Alfredo Palasciano, Alison Callahan, Anima\nShukla, Antonio Miranda-Escalada, Ayush Singh, Benjamin Beilharz, Bo Wang,\nCaio Brito, Chenxi Zhou, Chirag Jain, Chuxin Xu, Cl\u00e9mentine Fourrier,\nDaniel Le\u00f3n Peri\u00f1\u00e1n, Daniel Molano, Dian Yu, Enrique Manjavacas, Fabio\nBarth, Florian Fuhrimann, Gabriel Altay, Giyaseddin Bayrak, Gully Burns,\nHelena U. Vrabec, Imane Bello, Ishani Dash, Jihyun Kang, John Giorgi, Jonas\nGolde, Jose David Posada, Karthik Rangasai Sivaraman, Lokesh Bulchandani,\nLu Liu, Luisa Shinzato, Madeleine Hahn de Bykhovetz, Maiko Takeuchi, Marc\nP\u00e0mies, Maria A. Castillo, Marianna Nezhurina, Mario S\u00e4nger, Matthias\nSamwald, Michael Cullan, Michael Weinberg, Michiel De Wolf, Mina Mihaljcic,\nMinna Liu, Moritz Freidank, Myungsun Kang, Natasha Seelam, Nathan Dahlberg,\nNicholas Michio Broad, Nikolaus Muellner, Pascale Fung, Patrick Haller, Ramya\nChandrasekhar, Renata Eisenberg, Robert Martin, Rodrigo Canalli, Rosaline Su,\nRuisi Su, Samuel Cahyawijaya, Samuele Garda, Shlok S. 
Deshmukh, Shubhanshu\nMishra, Sid Kiblawi, Simon Ott, Sinee Sang-aroonsiri, Srishti Kumar, Stefan\nSchweter, Sushil Bharati, Tanmay Laud, Th\u00e9o Gigant, Tomoya Kainuma, Wojciech\nKusa, Yanis Labrak, Yash Shailesh Bajaj, Yash Venkatraman, Yifan Xu, Yingxin\nXu, Yu Xu, Zhe Tan, Zhongli Xie, Zifan Ye, Mathilde Bras, Younes Belkada, and\nThomas Wolf.",
|
| 179 |
+
"venue": "arXiv:2211.05100 [cs].",
|
| 180 |
+
"url": null
|
| 181 |
+
}
|
| 182 |
+
},
|
| 183 |
+
{
|
| 184 |
+
"17": {
|
| 185 |
+
"title": "SmoothQuant: Accurate and Efficient Post-Training\nQuantization for Large Language Models.",
|
| 186 |
+
"author": "Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han.",
|
| 187 |
+
"venue": "In Proceedings of the 40th International Conference on\nMachine Learning, pages 38087\u201338099. PMLR, July 2023.",
|
| 188 |
+
"url": null
|
| 189 |
+
}
|
| 190 |
+
},
|
| 191 |
+
{
|
| 192 |
+
"18": {
|
| 193 |
+
"title": "Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.",
|
| 194 |
+
"author": "Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao\nZhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E.\nGonzalez, and Ion Stoica.",
|
| 195 |
+
"venue": null,
|
| 196 |
+
"url": null
|
| 197 |
+
}
|
| 198 |
+
}
|
| 199 |
+
],
|
| 200 |
+
"url": "http://arxiv.org/html/2310.19902v2"
|
| 201 |
+
}
|
20240921/2311.02578v3.json
ADDED
|
@@ -0,0 +1,463 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Temporal Sequencing of Documents",
|
| 3 |
+
"abstract": "We outline an unsupervised method for temporal rank ordering of sets of historical documents, namely American State of the Union Addresses and DEEDS, a corpus of medieval English property transfer documents. Our method relies upon effectively capturing the gradual change in word usage via a bandwidth estimate for the non-parametric Generalized Linear Models (Fan et al., 1995). The number of possible rank orders needed to search through for cost functions related to the bandwidth can be quite large, even for a small set of documents. We tackle this problem of combinatorial optimization using the Simulated Annealing algorithm, which allows us to obtain the optimal document temporal orders. Our rank ordering method significantly improved the temporal sequencing of both corpora compared to a randomly sequenced baseline. This unsupervised approach should enable the temporal ordering of undated document sets.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "The accurate dating of historical and heritage texts is of paramount importance to historians. On the basis of such correctly sequenced texts, historians can examine, judge, and analyze events within the context of a specific time period. Often, only the undated textual contents of the historical documents are available to historians, on the basis of which they must infer the dates of composition (Gervers, 2002 ###reference_b15###). English property-transfer documents (charters or deeds) were selected as one component of this study because of their particular nature. That is, while the earliest surviving examples from the Anglo-Saxon period (c. A.D. 670 to 1066) are invariably dated, only 300 out of a total of approximately 1,600 can be considered originals. Experts have noted that many supposed Anglo-Saxon documents are actually later forgeries, but nonetheless are difficult to distinguish from genuine charters; \u201cThus in some cases, the date given is either demonstrably fictional or suspect\u201d (Robert Getz, personal communications, April 25, 2019) or, in many cases, the charters survived only in later copies made centuries after the date of issue, resulting in genuine errors arising from misreading or miscopying by scribes. Common instances include miscopying of Roman numerals, confusion about a witness\u2019s identity, or the names of reigning monarchs (Whitelock, 1979 ###reference_b42###; Cubitt, 1999 ###reference_b8###). For examples of documents allegedly forged or fabricated to justify social realities and for political motivations, see Hiatt (2004 ###reference_b18###).\nWhen the Anglo-Saxon political and judicial system was largely replaced by that of the Normans following their conquest of England in 1066, an entirely new phenomenon was introduced: the undated charter. From 1066 to circa 1307 (the start of the reign of King Edward II) only about 3% of the million or more known charters issued bear internal dates. Dating was reintroduced to the royal chancery in 1189 under King Richard the Lionhearted, but the example was not followed by the nobility and commoners for five score years more. Compared to Continental charters, which with few exceptions were regularly dated internally for the duration, the first 600 years of the English charter record has always floated on a sea of incertitude.\nIn historical research, the one essential principle is to identify the correct order of events. As is evident from the foregoing, that is one of the most difficult tasks in the profession. It is of far greater concern, however, than for historians alone; in fact, it is common to virtually all avenues of scholarship, not to mention of the many institutions upon which literate society depends. Undated documents are everywhere, leaving lawyers, police and spy agencies, forensic linguists, code breakers, artists and art historians, businesses, real estate agents, medical practitioners, military analysts, philosophers (the list is endless), with the responsibility of determining what event preceded or succeeded another. This study sets the stage for anyone with a series of undated, digitized texts, or even lists, to determine a chronological order thereof without having to undertake the arduous task of examining each document for contextual clues and references to specific events, and of identifying periodization through content analysis, handwriting and/or watermarks. 
All of these aspects can be accomplished automatically through the temporal sequencing methodology outlined below.\nBy way of examples in which correct temporal ordering was essential, we note that in the financial fraud investigations of Enron Corporation, forensic linguistics was used to analyze emails, memos and internal communications to re-construct the timeline of fraudulent activities even when the timestamps of these evidential materials were not always available (McLean and Elkind, 2003 ###reference_b28###).\nThe Library of Congress contains many written documents from former presidents of the United States. For example, The Papers of Abraham Lincoln111https://www.loc.gov/collections/abraham-lincoln-papers/ ###reference_ncoln-papers/### is a digitized corpus of over 40K documents consisting of the correspondence, notes, memos, personal letters, drafts of speeches of Abraham Lincoln from his time as a lawyer, congressman and then as the 16th president of the United States. Chronological gaps in The Papers remain, as not all of the original letters and documents were meticulously dated or preserved. A proper chronological order would give insight into the President\u2019s evolving thoughts and ideas through a tumultuous period of American History. Another similar example is The Papers of Thomas Jefferson222https://www.loc.gov/collections/thomas-jefferson-papers/ ###reference_ferson-papers/###, a digitized corpus of 25K items consisting of the correspondence of Thomas Jefferson, who was a diplomat, architect, lawyer and the third president of the United States. Besides correspondence, the collection also includes his drafts of The Declaration of Independence, drafts of laws, fragments of his autobiography, his personal notes, including his records of spending and even recipes! Establishing an accurate chronological order of The Papers is crucial in understanding the personal worldview and the evolving visions about the early Republic by one of the prominent Founding Fathers.\nThe medieval Exeter Book333https://www.exeter-cathedral.org.uk/learning-collections/explore-the-collections/the-exeter-book/ ###reference_ng-collections/explore-the-collections/the-exeter-book/### is another example. It is an anthology of Old English poetry and riddles from the 10th century, but the chronological order of none of the texts is known. Establishing a chronological order of the texts would give us a deeper understanding of the evolution of Old English and the literary culture of the Anglo-Saxon people.\nPrevious efforts in document sorting have been directed towards the development of historical language models (Feuerverger et al., 2005 ###reference_b13###, 2008 ###reference_b14###; Tilahun et al., 2016 ###reference_b40###; Gervers et al., 2018 ###reference_b17###). Within the broader field of information retrieval, investigators employ statistical models incorporating temporal aspects of term usage (Swan and Jensen, 2000 ###reference_b33###), study the relationship between time and the retrieval of relevant documents (Li and Croft, 2003 ###reference_b25###), and classify document dates according to time partitions of predefined granularity (De Jong et al., 2005 ###reference_b9###). In preparing web pages that present document lists the accuracy of time-stamping is of paramount importance. In this regard, Kanhabua and N\u00f8rv\u00e5g (2008 ###reference_b20###) and Kanhabua and N\u00f8rv\u00e5g (2009 ###reference_b21###) extended De Jong et al. 
(2005 ###reference_b9###)\u2019s work by integrating semantic-based techniques into the document pre-processing pipeline, the aim being to improve the temporal precision of web pages and web document searches where the trustworthiness of document time-stamps is often questionable. Lebanon and Zhao (2008 ###reference_b24###) modelled temporal text streams, and Mani and Wilson (2000 ###reference_b26###) extracted temporal expressions such as now, today, and tomorrow from print and broadcast news documents to resolve uncertain temporal expressions. Chambers (2012 ###reference_b6###) presented models that incorporate rich linguistic features about time, whereas Vashishth et al. (2019 ###reference_b41###) employed deep neural network-based methods that also exploit linguistic features about time in order to date the documents. However, as pointed out by Kotsakos et al. (2014 ###reference_b23###), relying upon temporal terms for dating suffers from the drawback that terms \u201ccan be very sparse as well as ambiguous, referring to irrelevant timeframes\u201d. These authors proposed using statistical approaches based on lexical similarity and burstiness \u2014 the sudden increase in the frequency of terms within a time-frame.\nIn the current work, we propose TempSeq444The TempSeq pseudo and source codes used in this paper are available at https://github.com/gitgelila/TempSeq ###reference_### , an unsupervised method for the temporal sequencing or ranking of documents. This approach is designed to be applicable when the only available data are the undated documents that are to be temporally sequenced. TempSeq relies on a \u2018bag-of-words\u2019 approach, and does not make use of linguistic features about time, nor does it use a training set data with time-tag. In addition, our approach does not make use of specific language rules, word representations, or any other metadata information, thus presenting a potentially significant advantage in the task of document temporal ordering. TempSeq relies on measuring word usage drift under the assumption that word usage changes gradually over time, which means that the temporal variability in word usage is low. We model word usage drift via the non-parametric Generalized Linear Models regression (Fan et al., 1995 ###reference_b12###), and we estimate the correct temporal sequencing of the documents to be the one that minimizes, on average, the associated kernel bandwidths (a direct measure of the temporal variability in word usage). To our knowledge, using the variability of word usage drift to ascribe a temporal sequencing for a set of documents is entirely new.\nThe necessity for temporal sequencing of documents arises not only in the field of information retrieval, but also in studies of heritage texts, which frequently lack timestamps, are intentionally ambiguous with respect to time of issue, or can even be outright forgeries. Often, only the textual contents of heritage documents are available, from which one must infer the dates of issue (Gervers, 2002 ###reference_b15###). Furthermore, heritage texts that are available as a training data set can be limited in number, as proportionately few documents have survived across the centuries to the present time, thus necessitating an unsupervised method for inferring document dates. The task at hand is not only to infer the temporal ranking of a collection of documents or corpus, but also to identify the terms that contribute most towards the task of identifying the correct temporal ranking/ordering. 
We believe that such terms are most likely the temporal signatures for identifying the characteristics of intentionally or inadvertently temporally mislabelled documents, or documents with missing or corrupted timestamps.\nPast and present research about problems arising in document sequencing involve criteria for document classification, for example, by topic, Blei et al. (2003 ###reference_b4###); Blei and Lafferty (2006 ###reference_b3###); McAuliffe and Blei (2008 ###reference_b27###); Taddy (2013 ###reference_b34###, 2015 ###reference_b35###), document indexing Roberts et al. (2016 ###reference_b32###), and document ranking using latent semantic analysis Deerwester et al. (1990 ###reference_b10###); Hofmann (1999 ###reference_b19###). Cohen et al. (1998 ###reference_b7###) consider the problem of machine learning to order instances, as opposed to classifying them, when an algorithm\u2019s output receives feedback in the form of preference judgments, i.e., \u201cstatements indicating which instances should be ranked ahead of the others\u201d (Cohen et al., 1998 ###reference_b7###). However, such lines of research have not directly addressed the problem of document ordering per se. Perhaps the closest approach to dealing with this problem is that of Thinninyam in her dissertation (Thinniyam, 2014 ###reference_b36###). Her approach is based on the notion of similarity (or distance) between two documents, namely the supposition that similar documents are more likely to discuss similar topics, and therefore should have closer underlying temporal footprints. She proposed a linear regression-based approach, with regression of observed document distance measures of spacings between consecutive documents. In a separate approach, Thinnayam also framed the document ordering problem in a Bayesian framework, where the distribution of pairwise distances between documents was modelled conditionally on a timeline vector, where the coordinate value of the vector represents the time interval between the document and a reference document. Herein, a Markov Chain Monte Carlo (MCMC) method was employed to sample from the posterior distribution of the timeline. Thinniyam\u2019s ordering methods fundamentally require pairwise distance measures of documents (i.e., a quantifiable measure of dissimilarity between two documents) to estimate the temporal order of a set of documents within a corpus. Other studies have shown that such measures are prone to yield spuriously high values of similarity due to an abundance of uninformative terms within the documents, including, but not limited only to stop words (Tilahun, 2011 ###reference_b38###); it is not always a straightforward matter to identify these uninformative terms, and the requisite degree of filtration. Moreover, using measures of document distance does not allow identification of the particular words or terms that are essential for determining the predicted temporal rank orders. In addition, the degree to which two documents are similar/dissimilar is highly dependent on the type of distance measures that are used Broder (1997 ###reference_b5###).\nIn contrast to Thinnyam\u2019s previous approach, the present TempSeq method temporally ranks a set of documents even when a reliably dated training dataset is not available, and/or when there is a very limited number of documents in the set. 
The TempSeq method relies fundamentally on modelling the probability of occurrence of words in a given date range, thereby avoiding the need for a document distance measure. By design, TempSeq also filters out words according to their degree of uninformativeness, thereby allowing us to gain insights into the history underlying the documents by identifying words that are most putatively useful for determining the correct temporal ordering of documents. We test the TempSeq method on two corpora of heritage texts; one written in American English and the other in Latin. When a set of training data is available, TempSeq can optimize the smoothing parameter for temporal sequencing. More importantly, we show for both corpora that the TempSeq temporal sequencing method performed significantly better as compared to random sequencing, in the absence of training data."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Corpora",
|
| 15 |
+
"text": "We evaluated our temporal sequencing methods on two different sets of corpora with time-tags. The first corpus consisted of 240 transcripts of the American State of the Union Address (SOTU), from the years 1790 to 2020. Each transcript had a median average length of 6400 words. This corpus is available from the R package, Arnold (2022 ###reference_b2###). The second corpus is from the Documents of Early England Data Set (DEEDS)555https://deeds.library.utoronto.ca ###reference_deeds.library.utoronto.ca###. From within this corpus, we focused on a set of 11,463 English property conveyance records issued in the years 1120 to 1300. All the records are written in Latin, and have been inspected for content by subject expert historians to accurately verify the date of issue. The Latin documents have a median length of 175 words. We chose to evaluate the TempSeq temporal document sequencing method on this corpus as it consists of documents similar to corresponding DEEDS documents from the Anglo-Saxon period, which, as mentioned in section 1 ###reference_###, have generally unreliable dates. In this project, we considered the DEEDS corpus in two different forms. In the first form, we conflate all the documents written in a given year into single texts, thus yielding 181 conflated DEEDS records, of mean average length of approximately 11,000 words. We denote the conflated collection as \u201cDEEDS-conflated\u201d. In the second form, we denote the entire set of 11,463 unconflated records as \u201cDEEDS-single\u201d."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Outline",
|
| 21 |
+
"text": "When a set of training documents with known dates\nis available, Tilahun et al. (2012 ###reference_b39###) have proposed\nthe \u201cMaximum Prevalence\u201d method for their dating. This approach is based on modelling (on the basis of the training data) a curve that describes the temporal pattern of the probability of occurrence of each word from the undated documents. For example, in the DEEDS corpus, the proposed dating method achieves very reliable date estimates, giving a test set median dating error of 5 years within the 230 year span (Gervers et al., 2018 ###reference_b17###). High accuracy validates an underlying feature of the model, namely useful words for dating a document are those with a non-uniform probability of occurrence across a date range, and showing a gradual change in the variability of their usage changes. Words such as et, de, huic (in Latin), the, to, that, on (in English), and stop words, which appear in consistent proportion at all times, that is to say uninformative words, do not contribute to the date estimation of an undated document.\nIn section 4 ###reference_###, we discuss our modelling approach to estimate the curves best describing the temporal pattern of the probability of occurrence of a given word/phrase, and present examples of such curves. We also examine the properties of the curve estimates (in particular, that of a smoothing parameter) in relation to the bias-variance trade-off, using the form of the bias-variance trade-off to select the optimal curve. This trade-off is at the heart of any statistical learning process. We could perfectly fit training data (zero bias) to a model by including excessive amounts of parameters (excessive under-smoothing in the case of our model). This situation, referred to as over-parametrization, risks overfitting the data because the model learns not only the pattern in the data but also the random noise and fluctuations that are present. When an overfitted model is applied to a test data, the performance is often poor. On the other extreme, when a model is under-fitted (over-smoothed in the case of our model) the effect from random noise is eliminated but at the expense of failing to learn the pattern from the data. The right amount of parameterization is one that balances bias and variance (large bias and low variance). We seek an optimization process which can balance this trade-off. This optimization seeks on the one hand to minimize bias, thereby increasing curve fluctuation to accurately track the empirical values of the probability of word occurrences. At the same time, the optimization minimizes variance, thereby decreasing the amount of curve fluctuation to obtain a smooth curve. The optimally smoothed curve for balancing the trade-off between these demands is a quantifiable parameter value that can be estimated using a \u201crule-of-thumb\u201d smoothing parameter estimate (Fan et al., 1995 ###reference_b12###). In section 5 ###reference_###, we address the problem of temporally ordering a set of documents in the absence of a set of dated training data. To this end, we compute the average value of the optimal smoothing parameters for estimating the probability of occurrence of each word in the documents. Here, we find a close estimate of the correct temporal order of well-spaced subsets of () documents by searching among all possible temporal orderings to identify the highest average optimal smoothing parameter. 
We carried out this search using combinatorial optimization via the Simulated Annealing algorithm. In section 6 ###reference_###, we evaluate the TempSeq method and present its results for the two distinct corpora \u2014 the DEEDS corpus and the SOTU corpus. In addition, we identify the informative words that enabled TempSeq to establish the correct temporal order for the selected subset of documents. In section 7 ###reference_###, we provide error analysis, and present our general conclusions in section 8 ###reference_###. Theoretical background and operational equations are presented in Annex A ###reference_### and Annex B ###reference_###."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "Modelling the Temporal Pattern of Word Usage",
|
| 27 |
+
"text": "Our fundamental assumption is that word usage changes gradually.\nWe model the probability of word usage as a function of time using the local polynomial kernel regression for the generalized linear model, Fan et al. (1995 ###reference_b12###). For further details of this model, see Annex A ###reference_###, section A.1 ###reference_###.\nSuppose represents a sequence of data pairs, where represents the date of the document and denotes the size of the data set, that is to say, the total number of documents in the collection. Let denote the number of occurrences of the word (or term) in document . Finally, let denote the total number of words (or terms) in document . We are interested in estimating the probability of occurrence of the term w at time , which is given by:\nWe define the weight term to be , where is called a kernel function. Typically, is a bell-shaped, non-negative function, with an area under the graph equalling unity.\nThe function decays fast enough to eliminate the contributions of remote data points. (See equation 9 ###reference_###, in Annex A ###reference_###, section A.1 ###reference_###). For the present study, we used the t-distribution function with a low degree of freedom value (equal to ) to adequately weigh distant points.\nAs a weight term, fits a polynomial regression curve around the data in the neighbourhood of , where , called the bandwidth parameter, is the size of the local neighbourhood, such that data points distal from the neighbourhood are down-weighed (for this reason, is also referred to as the curve smoothing parameter). In simple terms, if is very large (highly smoothed), then , thus representing an overall proportional outcome of word which does not change with . On the other extreme, if is very small, then, evaluated at, say, , has the value , which is the proportional outcome of word in the document written at time . In this case, information on the frequency of occurrence of word in documents written at dates near to has been completely ignored in determining the value of . When is very small, the curve overfits, thus fluctuating rapidly to attain the values for each time point . Although it is possible to draw a curve that perfectly describes the empirical probability of occurrence of a word across a date range (i.e., a bias with a value of zero), the consequent high variance of the curve means that, when applied to a test data set, the curve would overfit thus depicting a very inaccurate description of the probability of occurrence of the word of interest. In the field of kernel regression, there has been extensive research on how to select the appropriate bandwidth parameter . However, in implementing the TempSeq method, we have relied on using a rule-of-thumb selection approach (Fan et al. (1995 ###reference_b12###)), which we describe in Annex B ###reference_###, section B.1 ###reference_###. The theoretical details for the derivation of equation 1 ###reference_###, which falls under the locally constant case (), can be found in Annex A ###reference_###, section A.1 ###reference_###.\nRelying on our assumption that change in word usage is gradual, we now focus on the role played by the bandwidth parameter in setting the bias-variance trade-off of the estimator . Figure 1 ###reference_### illustrates the probability of occurrence of the words Drug(s) in the SOTU corpus at different dates and for variable bandwidth settings. The -axis illustrates the calendar year (the time t), ranging from 1790 to 2020. 
The -axis illustrates the values of . The asterisks show the proportion of occurrences of the words \u201cDrug(s)\u201d in the years for which dated SOTU documents are available. The proportion is greater than zero in a few of the years, but is zero for a few years preceding or following that time. In that circumstance, when the bandwidth value h is small, the resulting estimate, , is highly variable (the closer h approaches zero, the closer its values match the recorded proportion of occurrence of the term in the training data). This behaviour is illustrated by the dashed-line curve of in the figure. When the value of is larger (smoother), then the resultant probability curve , overlaid in solid, has less variability. Conversely, when the value of is very large, the closer the values of , for all , approach the proportion of the real occurrence of the term across all the dates in the document set (illustrated in dotted lines). The bandwidth, which controls both the bias and the variance, is therefore a crucial parameter of the estimator .\nThe optimal amount of smoothing of the data in figure 1 ###reference_### results in the solid curve, which illuminates a clear pattern in the data. The first peak in the figure (coinciding with the presidency of Richard M. Nixon (1970-1974)) refers to the emergence of the so-called War on Drugs, a US-led global policy aimed at the production, distribution, and use of psychoactive drugs, which was presented in Nixon\u2019s State of the Union address of 1972. That address contains the phrases \u2018\u2026 strong new drug treatment programs \u2026\u2019, \u2018\u2026 by continuing our intensified war on drug abuse \u2026\u2019, \u2018\u2026 Special Action Office for Drug Abuse Prevention \u2026\u2019, \u2018\u2026 collective effort by nations throughout the world to eliminate drugs at their source \u2026\u2019, \u2018\u2026 to drive drug traffickers and pushers off the streets of America \u2026\u2019, \u2018\u2026 to curb illicit drug traffic at our borders and within our country \u2026\u2019. His 1974 address contains the phrases \u2018\u2026 the spiraling rise in drug addiction \u2026 \u2019, \u2018\u2026 The Psychotropic Convention \u2026 treaty regulating manufactured drugs worldwide \u2026\u2019 and \u2018\u2026 the drug battle is far from over \u2026\u2019. The first peak extends to President Gerald Ford\u2019s 1976 address, which contains phrases such as, \u2018The sale of hard drugs is tragically on the increase again \u2026\u2019 and \u2018\u2026 shipment of hard drugs \u2026\u2019. The second peak in figure 1 ###reference_### occurs decades later, around the year 2000. Then President William Clinton\u2019s 1998 address contains phrases such as, \u2018\u2026 to crack down on gangs and guns and drugs \u2026\u2019 and \u2018\u2026 the largest anti-drug budget in history \u2026\u2019; his 1999 address has phrases such as \u2018\u2026 if you stay on drugs, you have to stay behind bars \u2026\u2019 and \u2018\u2026 to strengthen the Safe and Drug-Free School Act \u2026\u2019. In his 2000 address, Clinton announced \u2018\u2026 new legislation to go after what these drug barons value the most, their money.\u2019. Clinton also invokes the words drug(s), in the positive sense of insurance coverage for affordable prescription drugs. In that context, he used phrases such as \u2018\u2026 seniors now lack dependable drug coverage \u2026\u2019 and \u2018Lifesaving drugs are an indispensable part of modern medicine \u2026\u2019. 
In the following years, under President George W. Bush, policy regarding affordable drug coverage becomes a major issue domestically and globally; \u2018\u2026 some form of prescription drug coverage \u2026\u2019 (in the 2001 address), \u2018\u2026 new drugs that are transforming health care in America \u2026\u2019 (in the 2003 address) and \u2018More than 4 million require immediate drug treatment\u2019 (in the 2003 address regarding the lack of antiretroviral drugs in Africa). Thus, the first peak in figure 1 ###reference_### is exclusively related to illicit drug issues, whereas the second peak, some 25 years later, is primarily related to the affordability of prescription drugs.\nFigure 2 ###reference_### illustrates the probability of occurrence of the words Anglicis and Anglis in the DEEDS corpus. These words are often found within the form of address \u2018Franc[igen]is quam(et) Angl[ic]is\u2019 (French and English), such as \u2018\u2026 tam presentibus quam futuris tam Francigen[is] quam Anglicis salutem Sciatis me intuitu dei assensu \u2026\u2019, (\u2026 both present and future, both French and English, greeting. Know that with God\u2019s consent I have [granted] \u2026) and \u2018\u2026 omnibus ministris et fidelibus suis Francis et Anglis de Oxenfordscira \u2026\u2019, (\u2026 to all his French and English ministers and servants of Oxfordshire \u2026). The above form of address was commonly used by French and the English barons of the time to address their subjects. However, after the province of Normandy was conquered by the French in 1204, this form of address virtually fell out of use.\nFigure 3 ###reference_### illustrates the probability of occurrence of a common stop word \u2018de\u2019 (of) in the DEEDS corpus. The asterisks in the figure show the proportion of occurrences of the word across time (1120 to 1300), and the line curve is found by smoothing the proportion of occurrences of the word across those same years. When comparing the smoothed black curves in figure 1 ###reference_### to figure 3 ###reference_###, we see that the latter curve is more uniform across time (except for the years prior to 1125, when fewer documents were available). This behaviour, which we call the principle of temporal uniformity (non-uniformity) of uninformative (informative) words, is shown in the figure where the probability of occurrence of de has no defining temporal feature, and is uniform across the date range. For the purpose of temporally ordering a set of documents, no matter what combination of temporal ordering is evaluated in the TempSeq process, the contribution of uninformative words is immaterial.\nIn all of these figures 1 ###reference_### to 3 ###reference_###, we see that the solid black curves have relatively the most optimal smoothing, as opposed, for example, to the highly variable dashed-line curve in figure 1 ###reference_###. The optimal smoothing for a given curve represents the trade-off between small bias and small variance for the curve estimator. If the data analyst then randomizes the true temporal ordering of word usage, applying the optimal smoothing parameter will now result in a curve estimate that rapidly oscillates (high variance) due to seeking a minimum bias.\n###figure_1### ###figure_2### ###figure_3###"
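As a concrete illustration of the estimator in equation 1 and of the role of the bandwidth, the following is a minimal R sketch of the locally constant (p = 0) kernel-weighted estimate, using the Student-t kernel with 5 degrees of freedom mentioned above. The synthetic data and all names are ours, for illustration only.

```r
# Locally constant estimate of the probability of occurrence of a word at t0:
#   pi_hat(t0) = sum_i K_h(t_i - t0) * y_i / sum_i K_h(t_i - t0) * N_i,
# with y_i the count of the word in document i, N_i the document length, and
# K_h a scaled Student-t (df = 5) kernel, as used in the paper.
pi_hat <- function(t0, t, y, N, h, df = 5) {
  w <- dt((t - t0) / h, df = df) / h   # kernel weights K_h(t_i - t0)
  sum(w * y) / sum(w * N)
}

# Toy data: a word whose usage peaks around 1970, observed in yearly documents.
set.seed(1)
dates <- 1790:2020
N     <- rep(5000, length(dates))                        # document lengths
p     <- 0.0005 + 0.004 * exp(-((dates - 1970) / 10)^2)  # true occurrence probability
y     <- rbinom(length(dates), size = N, prob = p)       # simulated word counts

# Small h tracks the raw proportions y_i / N_i (high variance); large h
# flattens the curve towards the overall proportion (high bias), cf. Figure 1.
curve_rough  <- sapply(dates, pi_hat, t = dates, y = y, N = N, h = 2)
curve_smooth <- sapply(dates, pi_hat, t = dates, y = y, N = N, h = 15)
```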
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "5",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "The TempSeq Method for Temporal Sequencing",
|
| 33 |
+
"text": ""
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "5.1",
|
| 37 |
+
"parent_section_id": "5",
|
| 38 |
+
"section_name": "Determining the Optimal Bandwidth",
|
| 39 |
+
"text": "Let be a set of number of documents that we wish to sequence in temporal order. We assume that all the documents have a unique timestamp, and as such, there are possible orders when two orderings that are the reverse of each other are equated. Without loss of generality, assume that the sequence represents the true temporal rank order of the documents. Let represent a permutation of the ranks and let represent permutation identity, that is to say . For each word , , and temporal rank ordering of the documents under ), we compute the asymptotically optimal bandwidth value for , which we denote as . This bandwidth value is estimated via a \u2018rule-of-thumb\u2019 estimate, the detail of which can be found in Annex B ###reference_###, sections B.1 ###reference_### and B.2 ###reference_###. The formulation of here is subject to the condition (the case of the locally linear regression, equation 10 ###reference_###), which is more accurate than the formulation of in equation 1 ###reference_### (the case of locally constant regression, ). For theoretical details, refer to Annex A ###reference_###, section A.1 ###reference_###.\nFollowing the principle of temporal non-uniformity of informative words, the optimal smoothing parameter will be larger under the correct temporal ordering of the documents, since the curve would not entail such extensive oscillation to obtain a small bias. Therefore, we would generally expect\nto hold for each word . Put another way, the rule-of-thumb bandwidth estimate of a word associated with the correct temporal ordering of documents will be larger than those bandwidth estimates based on incorrect temporal orderings. For a set of documents , we estimate the temporal rank order for the set of documents by first computing\nwhere is the uniform median value of the optimal bandwidths associated with each word666 For a word to be included in the estimations, we required that it occurs in at least two documents as measuring the pattern of word usage fluctuations is a key element. present in the number of documents, and is a proposed temporal ordering. The temporal rank order estimate, , is the rank order which maximizes the term over all possible permutations; stated more succinctly,\nThe estimated temporal rank order is one that results, on average, in the smoothest rule-of-thumb bandwidth estimate of over all the words in the number of documents and over all possible temporal rank orders.\nTo verify our expectation that equation (2 ###reference_###) in fact holds in general, which would imply that also holds, we conducted the following experiment separately on the DEEDS and the SOTU corpora. In each case, we randomly selected a set of ten documents with date gaps of about 20 years, i.e., one tenth of the document history, thus obtaining a trade-off between excessive computational time and fitness of the method for correct ordering. In this computation, only those words that occurred at least once in two separate documents were considered. Based on random permutations of the underlying true temporal rank order, , we computed . For the same set of ten documents, we also computed , when the true temporal order of the documents was not permuted. We ran 100 replications of the above experiment. Figures LABEL:boxplotSofUHsigma, LABEL:boxplotDEEDSHsigma and LABEL:boxplotDEEDSSingleHsigma are the box plots of (bandwidths) for the SOTU, DEEDS-conflated and DEEDS-single corpora, respectively. 
In each figure, the first box plot is that of where is a random permutation of the true temporal order of the given set of ten documents, and the second box plot depicts the case when the true temporal order of the same ten documents is maintained. As shown by these box plots, optimal bandwidths associated with the true temporal orderings are generally larger (smoother) than those associated with random orderings. In comparing all the box plots in the figures, we see that those associated with DEEDS-single more closely resemble one another. This result is not surprising, since the computation of equation 3 ###reference_### on sets of documents relies upon fewer words than those from the SOTU and DEEDS-conflated corpora.\nWe note that no matter what the permutation of the underlying documents\u2019 sequence, the bandwidth associated with uninformative words remains unchanged. Therefore, the contribution of uninformative words has negligible influence on the estimation of the temporal rank order of the set of documents."
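A schematic R sketch of the objective in equations 3 and 4 follows: for a candidate ordering, a per-word optimal bandwidth is computed and the median is taken over words occurring in at least two documents. The per-word rule-of-thumb bandwidth of Fan et al. (1995) is abstracted here as a user-supplied function bw_fun (a hypothetical stand-in; its operational form is summarized in Annex B), and all names are ours.

```r
# H(sigma): median, over admissible words, of the per-word optimal bandwidth
# computed under the candidate ordering `perm` (a permutation of document
# indices, read as the proposed temporal order). `y` is the word-by-document
# count matrix, `N` the vector of document lengths, and bw_fun(t, y, N) is
# assumed to return the rule-of-thumb bandwidth for a single word.
H_sigma <- function(perm, y, N, bw_fun) {
  keep <- rowSums(y > 0) >= 2          # only words occurring in >= 2 documents
  hs <- apply(y[keep, , drop = FALSE], 1,
              function(yw) bw_fun(t = seq_along(perm), y = yw[perm], N = N[perm]))
  median(hs)
}
# The estimated temporal order (equation 4) is the permutation that maximizes
# H_sigma over all candidate orderings.
```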
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "5.2",
|
| 43 |
+
"parent_section_id": "5",
|
| 44 |
+
"section_name": "Estimation via Simulated Annealing",
|
| 45 |
+
"text": "With an increasing number of documents that we wish to place in order, there is a corresponding increase in the number of permutations required to search exhaustively in order to obtain (equation 4 ###reference_###). For example, when , the requisite number of permutations equals (circa 1.8 million), where a permutation and its reverse are equated. Scaling up to a large number of possible permutations to optimize in an objective function (such as equation 4 ###reference_###) calls for combinatorial optimization. We propose to solve this problem using the well-known Simulated Annealing algorithm (Kirkpatrick et al., 1983 ###reference_b22###).\nIn the current task, we are attempting to find permutations of the temporal rank order of the document sets that maximize . The optimization problem involves a search over the neighbours of a permutation element , and the generating scheme of the candidate solution (neighbourhood) along with its set size are important factors in the performance of Simulated Annealing (Tian et al., 1999 ###reference_b37###). We employ a neighbourhood generating scheme proposed by those authors for the well known Travelling Salesman Problem. The proposed scheme generates a random permutation solution from the current one by reversing and/or moving a subsequence of terms. For example, the sequence , could generate the . In fact, this perturbation scheme (where a random set of subsequences with four terms that are randomly reversed and/or moved), was employed in this paper to generate the candidate neighbours for the Simulated Annealing algorithm. The authors prove that under such a random perturbation scheme to generate random permutation solutions, the Simulated Annealing algorithm converges asymptotically to the set of global optimal solutions."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "6",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "Evaluation and Results",
|
| 51 |
+
"text": "For a task of ordering temporally a set of number of documents , as in section 5 ###reference_###, let the true temporal rank order of the documents be . Let represent a permutation of the ranks and let represent the permutation identity (). We measure the extent to which two permutations are in close proximity to one another using the Spearman\u2019s () rank correlation. If is the predicted temporal rank order for the set of documents , then the closer is to unity, the more accurately the predicted order matches the true order (we only consider the absolute value of the correlation since forward and reverse orders are equated, as noted above).\nWe randomly selected sets of 10 documents, dated approximately 24 years apart for the SOTU, and 18 years apart for the DEEDS-conflate corpora. For the random selection, we used systematic sampling, as follows: First, we randomly selected the start date document, and then every 24th year in succession for the annual SOTU, and then for ever 18th year for the annual conflation of DEEDS documents (DEEDS-conflated), or the corresponding randomly selected single DEEDS documents (DEEDS-single). If the subsequently selected year exceeded the range of dates, we cycled through from the start date. Then, we labelled the temporal rank of the resulting documents from 1 to 10. Starting from an initial random temporal order of the ten documents, we estimated their true temporal rank order as described above, and calculated the Spearman correlation for the true and shuffled ordering. In all cases, our analysis was based on words that occurred at least once in two separate documents (otherwise, no information regarding temporal ordering can be inferred). For 100 replications of the above procedure, the median of the absolute value of the correlations between the estimated and the true rank orders was 0.66 for the SOTU corpus and 0.78 for the annually conflated DEEDS corpus. As a baseline comparison, we note that the median of the absolute value of the correlations between the true temporal rank orders and their 100 random permutations was 0.24. In the computation, we only considered words that occurred more than once in the set of 10 documents. As shown in the box plots of figure 6 ###reference_###, the TempSeq method performed significantly better than the baseline correlation. The Wilcox rank sum test and the t-test showed a statistically significant difference between the baseline correlation and the correlations associated with each of the SOTU and the DEEDS-conflated corpora. In both cases, .\nRegarding the DEEDS-single corpus, the TempSeq did not perform as well as for the temporally DEEDS-conflated collection. Although statistically significantly better than the baseline correlations (for tests based on Wilcox rank sum and the t-test, ), the median correlation coefficient between the estimated and the true rank orders was only 0.45 (see figure 7 ###reference_###). This raises the question of why TempSeq under-performed on the DEEDS-single collection as compared to the DEEDS-conflated collection. These two collections differ in that the latter was created by merging all DEEDS documents for a given year, such that the DEEDS-conflated documents had a mean of 11,000 words as compared to only 175 words for the DEEDS-single collection. As such, the temporal sequencing for the DEEDS-single collection according to equation 4 ###reference_### is based on a very small sample of words. 
Given the requirement that each word should occur at least once in two separate documents within a set, there are correspondingly fewer words informing the estimation of temporal order.\nAs stated in section 1 ###reference_###, the TempSeq approach for document sequencing allows the identification of words that are most informative for determining the correct temporal order. The process that we used to identify and analyze the informative words is as follows: As an example, we first considered a set of 10 documents from the DEEDS-conflated collection for which the TempSeq prediction of ordering, , resulted in a Spearman correlation coefficient of 0.92. Although the total number of unique words across these 10 documents was 15,757, when we considered the number of words that occurred at least once in two documents, the number of the relevant unique words declined to 5,988. Of these 5,988 words, we selected those with frequency values in the top 50th percentile. For each of these selected words, we computed the rule-of-thumb optimal bandwidth values under the predicted TempSeq order, . On the basis of these optimal bandwidths, we extracted words that were in the top 88th percentile for their maximum probability of occurrence score over the temporal rank domain in equation 1 ###reference_###. The first condition ensured that the words under consideration were those that occurred with sufficient frequency (exceeding the median). The second condition ensured that when a set of documents was listed in its correct temporal order, the informative words (i.e., the words that were most useful for attaining the correct temporal ordering using TempSeq method) were those whose occurrence clustered in near-by time periods. In turn, these clusterings resulted in higher word probability occurrence around those same time periods (refer to equation 1 ###reference_###) as compared to the other time periods. Using the above word filtering procedure, we obtained a final list of around 360 relevant words.\nTo interpret meaningfully the 360 relevant words in the list, we compared them with the representative topic terms from an LDA (Latent Dirichlet Allocation, Blei et al. (2003 ###reference_b4###)) topic modelling run on eight topics on the entire DEEDS-single documents (Gervers and Tilahun, 2023 ###reference_b16###). In the pre-processing stage (prior to running the LDA algorithm), each document was split into tri-gram words (sequences of three consecutive words). The topic proportions for each of the documents were aggregated in accordance with their date of issue (see Figure 6.1 in Gervers and Tilahun (2023 ###reference_b16###)). When we examined the dominant topics corresponding to the dates from the set of the 10 documents, we found that the vast majority of the informative words extracted using the procedure outlined above matched the words in the tri-gram topic terms. 
Some examples of the informative words and their contexts include finalis facta concordia (indicating that the document is a \u2019final concordance made\u2019), anno regni regis (\u2019in the year of the king\u2019s reign\u2019), scripto sigillum meum (\u2019marking the document (with) my seal\u2019, indicating the sealing of a grant), pro omni seruitio (\u2019for all service\u2019, indicating that a transfer was not a simple donation), and perpetuam elemosinam (\u2019in perpetual alms\u2019, indicating that the transfer was a donation).\nFor the SOTU corpus, we similarly selected a set of 10 documents, each one separated by 23 years during the interval from 1810 to 2019. The Spearman correlation coefficient obtained between the TempSeq ordering method and the true orders for this set of documents was 0.95. The informative words were extracted and examined in the same way as described for the DEEDS-conflated corpus. Of a total of 29,537 words, there were 3,957 words that occurred at least once in at least two documents, of which the top 20th percentile of the maximum probability of occurrence score over the temporal rank domain in equation 1 ###reference_### yielded 435 words. For the purposes of illustration, we present three words from the short list of relevant words: Britain, Families, and Court. A bar graph of the frequencies of these words, counted from the selected set of 10 presidential speeches, is illustrated in figure 8 ###reference_### .\nWe then ran an LDA topic modelling with five topics, where in the pre-processing stages, documents were split into bi-grams (sequences of two consecutive words), over the entire SOTU corpus.\nTo enable an interpretation of the words that drove the temporal ordering, we examined the high ranking (top 20) bi-gram words of a topic associated with fiscal and commercial interests of the US. Within this topic, Great Britain was one of the high ranking words. One of the uni-gram forms of the term, Britain, turned up in the list of the top relevant words.\nFrom the 1810 SOTU address by James Madison, the term Britain was invoked in the context of the naval blockade suffered by the US when the Napoleonic Wars had spilled into the Atlantic. Twenty-three years later, Britain was discussed in the context of final settlement on the US North-East boundary and navigational safety concerns; another twenty-three years later, in 1856, Franklin Pierce\u2019s address discussed Britain in the contexts of her desire to dominate the Panama routes (and US refusal thereof), rights to fisheries, increasing trade between the US and British Provinces in North America, and maritime rights regarding immunity from seizure: \u2018\u2026 the private property of subjects and citizens of a belligerent on the high seas \u2026 by the public armed vessels of the other belligerent, except it be contraband.\u2019 Britain was a subject in Ruthford Hayes\u2019 1879 address regarding the settlement of a dispute over rights to fisheries in Canadian waters. In two of the subsequent presidential addresses from the 10 documents, Britain was mentioned once in each, and not mentioned thereafter. Thus, the correct temporal order (and the TempSeq ordering method) of the 10 selected documents optimized a gradual pattern of change in the usage of the word Britain.\nAnother informative word to TempSeq analysis is Families, although it was not ranked highly from LDA, either as a uni-gram or as a portion of a word in a bi-gram. 
In the set of 10 SOTU documents, the word Families was barely mentioned prior to the address by Harry Truman in 1948. The frequent usage of that word appears in the later time periods. For example, in the 1948 Address, Harry Truman mentioned Families in the contexts of a social safety net and policies aiming to raise the standard of living for ordinary Americans. For example, we note \u2018public housing for low-income families\u2019, the provision of price support for farm commodities to enable \u2018farm families \u2026 to catch up with the standards of living enjoyed in the cities\u2019 and anti-inflation measures to fight the \u2018undermining [of] the living standards of millions of families\u2019. In Lyndon B. Johnson\u2019s 1968 address, the word Families was invoked to boast about the increase in the wealth accumulation of \u2018most American families\u2019 as \u2018more and more families own their own homes \u2026 television sets\u2019. He urged congress to authorize more money to allow \u2018new housing units for low and middle-income families\u2019 to be built in order for \u2018\u2026 thousands of families to become homeowners, not rent-payers\u2019. In William Clinton\u2019s 1996 Address, the word Families is invoked in the context of sheltering \u2018working families\u2019 from the effects of government cuts. In spite of cuts and a shrinking government, Clinton states his belief in the possibilities of cultivating \u2018stronger families\u2019. Further, he speaks of the challenge to \u2018strengthen America\u2019s families\u2019 and thanks his wife for having taught him \u2018the importance of families and children\u2019. Clinton also challenges \u2018America\u2019s families to work harder to stay together\u2019 because \u2018families who stay together not only do better economically, their children do better as well\u2019. Furthermore, the word Families is invoked in the context of health insurance policies \u2013\u2018\u2026 over one million Americans in working families have lost their health insurance\u2019. The 2019 Address by Donald Trump invokes the word Families in his statement \u2018We passed a massive tax cut for working families\u2019. The word Families is also mentioned in the context of victims of criminal violence whom Trump had met \u2013 \u2018I have gotten to know many wonderful angel moms, dads and families\u2019.\nAmong the 10 selected presidential speeches, Court was found to be an informative word, despite not being ranked highly by LDA. Examining the bar graph in figure 8 ###reference_###, there is an abundant usage of the word (34 times) in the 1925 address by Calvin Coolidge. Although occurring a few times in the other prior presidential speeches (except for that of James Madison), the word Court did not occur after this date. Emerging as a great power after the end of World War I, the United States sought a foreign policy with global influence. Calvin Coolidge invoked the word Court primarily in discussing his administration\u2019s support in joining the Permanent Court of International Justice, which had been set-up in 1922. 
In his speech, Coolidge encouraged the senate to support adherence to the Court by arguing that the United States\u2019 interests would not be negatively affected, for example in stating \u2018\u2026 by supporting the court we do not assume any obligations under the league \u2026\u2019; \u2018\u2026 the statute creating the court shall not be amended without consent \u2026\u2019; \u2018No provision of the statute \u2026 give[s] [the] court any authority to be a political rather than a judicial court\u2019, and \u2018If we support the court, we can never be obliged to submit any case which involves our interests for its decision\u2019.\nThe word Court was also invoked in the 1879 address by Rutherford B. Hayes, although to a lesser extent than compared to that of Calvin Coolidge. Here, the primary contexts of usage involved criminal offenses and court administration. In the context of criminal offences, there is the example of the practice of polygamy in Utah, which would no longer be defended under the constitutional guarantee of religious freedom, \u2018The Supreme Court of the United States has decided the law to be within the legislative power of Congress\u2019. Another instance concerns the urgency to introduce a justice system to prosecute criminals in the newly acquired territory of Alaska: \u2018bill authorizing \u2026 detention of persons charged with criminal offenses, and providing for an appeal to United States courts \u2026\u2019. In the context of court administration, we find the phrases, \u2018The business of the Supreme Court is at present largely in arrears\u2019; \u2018\u2026 magistrates who compose the court can accomplish more than is now done\u2019, and \u2018\u2026 additional circuit judges and the creation of an intermediate court of errors and appeals, which shall relieve the Supreme Court of a part of its jurisdiction \u2026 \u2019.\n###figure_4### ###figure_5### ###figure_6###"
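A short R sketch of the evaluation step follows, equating an ordering with its reverse by taking the absolute Spearman correlation and generating a random-permutation baseline for sets of ten documents; the names are ours, for illustration.

```r
# Agreement between an estimated and the true temporal rank order, equating
# forward and reverse orders by taking the absolute Spearman correlation.
order_agreement <- function(est_rank, true_rank) {
  abs(cor(est_rank, true_rank, method = "spearman"))
}

# Random-permutation baseline for sets of 10 documents (the paper reports a
# median of about 0.24 for this baseline).
set.seed(2)
baseline <- replicate(100, order_agreement(sample(10), 1:10))
median(baseline)

# Given a vector of TempSeq correlations from the 100 replications, a one-sided
# Wilcoxon rank-sum test against the baseline could be run as:
# wilcox.test(tempseq_cors, baseline, alternative = "greater")
```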
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "7",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Error Analysis",
|
| 57 |
+
"text": "We conducted an error analysis on the subset of the 100 replication sets of 10 randomly selected documents for which TempSeq under-performed, with a cut-off of correlation coefficients falling below the 10th percentile. For the SOTU corpus, these were the sets of documents for which correlation coefficient between estimated temporal ordering via the TempSeq method and their true temporal ordering was less than 0.27. For the richer DEEDS-conflated corpus, the corresponding threshold correlation was 0.62. When comparing the average bandwidth values, equation (4 ###reference_###), of the estimated temporal orderings for such sets of documents to that of the average bandwidth value under their correct temporal orderings, i.e., , the values of were generally larger for the latter case, as shown in figures 9 ###reference_### and 10 ###reference_###. This reflects the lesser variability of word usage, and the gradual change in word frequency with time. The under-performance of TempSeq on the sets of documents under discussion is therefore explicable by the inadequate search runs of the Simulating Annealing algorithm that searches for the optimal temporal ordering in equation (4 ###reference_###).\n###figure_7### ###figure_8###"
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "8",
|
| 61 |
+
"parent_section_id": null,
|
| 62 |
+
"section_name": "Conclusion",
|
| 63 |
+
"text": "A natural question that arises at this point is whether employing the models used in Large Language Models (LLM), which have shown unprecedented capabilities in understanding and generating natural language, can be used to solve the temporal sequential ordering problem described above. The fundamental problem with the possible application of LLM for the research question posed in this paper is that the amount of textual data required to train an LLM is typically massive (for example, the first BERT model was trained on over 3.3B words, Devlin (2018 ###reference_b11###)). By comparison, the SOTU corpus has only a total of around 1.5M words, and the DEEDS corpus has only a total of around 2M words, with the dates of documents in each corpus spread across two and half centuries. Even to employ static word embeddings, which are ways to represent words as vectors in a multidimensional vector space, the smallest training sets required for reliable word representation using Word2Vec (Mikolov et al., 2013 ###reference_b29###) and GloVe (Pennington et al., 2014 ###reference_b30###) were 24M words (a subset of Google News corpus) and 1B tokens (2010 Wikipedia), respectively. If pre-trained LLM models were to be leveraged for the task at hand, those models would need to have been trained on the right kinds of corpora, in the sense of topics, genre, time periods and language. Moreover, assuming we can obtain effective word representations using LLM, it is not clear how we could use them for the task of temporal ordering of a set of documents. More pertinently, the research question posed in this paper seeks to temporally order a set of documents (for example, sets of ten documents following the TempSeq experiments in sections 5 ###reference_### and 6 ###reference_###) when a training corpus is not necessarily available, as is the case for many heritage texts. Far below the size of data required for LLM, a set of ten documents from SOTU has a total average count of 64K words (an average of 6400 words per Presidential Address), and a set of ten documents from DEEDS has a total average count of 1750 words (an average of 175 words per document).\nMotivated by problems arising in the dating of historical and heritage texts, we set about in this paper to develop a method for assigning temporal rank orders to a sequence of documents when the date of issue is either missing or uncertain. In the historical context, the limited number and length of surviving manuscript texts presents a particular challenge in ordering. Our unsupervised method for document rank ordering relies on the principle that word usage changes gradually, on the scale of decades. Our method effectively captures changing word usage in the DEEDS and SOTU corpora by the bandwidth estimates. As shown (in section 6 ###reference_###), the median of the correlation values for both corpora were significantly higher than the baseline from random ordering of sets of 10 documents. However, when the sizes of each of the documents are composed of fewer words, as in the case of the DEEDS-single, the TempSeq method doesn\u2019t perform as well due to the lack of adequate number of words on which to base the necessary estimates.\nIn practice, a reliable document rank ordering method should furnish the opportunity to identify particularly informative words for estimating the correct document orderings (for example, words with relatively high values of for the optimal ordering). 
Equally, such a method provides us with the potential to identify anachronistic words, which may have been inserted into documents for nefarious reasons, such as the case of forged documents mentioned in section 1 ###reference_###.\nIn our current experimental design, we selected orderings of documents that were approximately twenty years apart and extending over 200 years; our procedures for ordering performed significantly better than the randomization. In the future, we shall examine the performance of our method when selected documents are separated at variable time intervals. We also plan to examine the temporal rank orderings of documents when the number of words in the documents is extremely limited, namely the particular case of Anglo-Saxon texts.\nAcknowledgments: The authors gratefully acknowledge Prof. Paul Cumming of Bern University for his critical reading of this manuscript."
|
| 64 |
+
}
|
| 65 |
+
],
|
| 66 |
+
"appendix": [
|
| 67 |
+
{
|
| 68 |
+
"section_id": "Appendix 1",
|
| 69 |
+
"parent_section_id": null,
|
| 70 |
+
"section_name": "Appendix A Annex 1",
|
| 71 |
+
"text": "Suppose represents a sequence of data pairs, where represents the date of the document and denotes the size of the data; that is, the total number of documents in the collection. Let denote the number of occurrences of the word (or term) in document . Let denote the total number of words (or terms) in document . We are interested in modelling the probability of occurrence of the term w at time . The Generalized Linear Models (GLM) for the binomial family is therefore a natural point of departure.\nThe GLM assume that the conditional likelihood of the response , given the explanatory variables , has an exponential family form\nwhere , , and are known functions, but the value of the dispersion parameter, is not necessarily known. The parameter is called the canonical parameter. The conditional mean and variance of the above model can be shown to be\nand\nIn the parametric form of GLM, a function of the conditional expectation is regressed on the variable as\nwhere is a vector of the regression coefficients. If , then is designated as the link function because it links the conditional expectation to the canonical parameter , such that we model . Given independently observed data , the values are estimated by maximizing the joint conditional likelihood (in the form of equation (5 ###reference_###) or equivalently, by maximizing the joint conditional log-likelihood) over the \u2019s:\nIn the binomial family setting, let be the number of trials and the number of successes in the trials. Let be the predictor variable such that , where is the probability of success. Our interest is in estimating the mean of the sample proportion rather than the mean number of successes. Letting , we wish to estimate . Suppose are samples drawn from where . The form of the joint conditional log-likelihood can be written as\nwhere , , , , and Agresti [2002 ###reference_b1###] .\nFor the above binomial example, we model the canonical link function as a polynomial of degree at most () in the predictor variable, and is the vector of coefficients of this polynomial. In viewing the above as a reformulation of equation (6 ###reference_###), we wish to maximize (with respect to the \u2019s)\nOne of the deficiencies from which this model suffers is its lack of flexibility, namely that the optimal values of the \u2019s are global \u2014 a set of parameter values over the entire domain. In the context of the problem addressed in this paper, we do not have a pre-defined idea as to the number of parameters that are necessary to model the probability of occurrence of tokens via the canonical link function (the logit, ) as it varies over a given range of time. Our aim is to model the probability locally \u2014 that is, to relax the global polynomial assumption and to allow the \u2019s to adjust locally within a small neighbourhood of the domain space [Fan et al., 1995 ###reference_b12###].\nThe local modelling approach thus leads to the following new local log-likelihood objective function:\nwhere\nWe define where is a kernel function and the scaled factor is the associated bandwidth. The kernel function is typically a continuous, unimodal, symmetric, and non-negative. It satisfies the condition and decays fast enough to eliminate the contributions of remote data points. The kernel function could be a Gaussian distribution among many other possibilities, although in this paper, we used the t-distribution function with a low degree of freedom value (equal to 5), so as not excessively to discount distant data points. 
As a weight term, fits a polynomial regression curve around the data in the neighbourhood of , where is the size of the local neighbourhood.\nWe build locally flexible \u2019s at in the neighbourhood of the point for the canonical link function using the following expansion:\nwhere . Maximizing with respect to the \u2019s, when the polynomial order is , which is to say the locally-constant regression case, we obtain\nAllowing to maximize the above expression, we find that\nwhere is an estimate of the probability of success at . When , it follows\nwhere and maximize the above equation. The maximizers can be found using numerical methods, such as that of Newton-Raphson, where the initial value of is set to be the solution for the local polynomial estimator together with . The estimator (which doesn\u2019t have a closed form solution) is given by"
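For the locally linear case (p = 1), which has no closed-form solution, a pointwise fit can be sketched in R by passing the kernel weights to glm(). This is our own illustrative implementation, not the authors' code; the quasibinomial family is used only to suppress integer-count warnings and yields the same point estimates as the binomial fit.

```r
# Local linear (p = 1) estimate of pi(t0): maximize the kernel-weighted
# binomial log-likelihood around t0 by fitting a weighted logistic regression
# in the centred covariate (t - t0); the fitted intercept is eta(t0), and
# pi(t0) is its inverse logit.
local_linear_pi <- function(t0, t, y, N, h, df = 5) {
  w   <- dt((t - t0) / h, df = df) / h                 # kernel weights
  fit <- glm(cbind(y, N - y) ~ I(t - t0),
             family = quasibinomial, weights = w)      # same estimates as binomial
  unname(plogis(coef(fit)[1]))
}
```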
|
| 72 |
+
},
|
| 73 |
+
{
|
| 74 |
+
"section_id": "Appendix 2",
|
| 75 |
+
"parent_section_id": null,
|
| 76 |
+
"section_name": "Appendix B Annex 2",
|
| 77 |
+
"text": "Regarding the notations used in this section, refer to Annex A ###reference_###.\nFan et al. [1995 ###reference_b12###] derive a rule-of-thumb bandwidth parameter estimate for the curve . Recalling that (Annex A ###reference_###, section A.1 ###reference_###), the curve estimate \nwas determined having first estimated the canonical logit function where the local polynomial fitting is for , in equation 8 ###reference_### of Annex A ###reference_###, section A.1 ###reference_###.\nThe error incurred when estimating with is measured using the asymptotic mean squared error (AMSE) criterion:\nWhen and , as ,\nwhere\nThe expansion of above is split into the squared bias and variance terms of , reflecting the bias and variance trade-offs. As the asymptotic expansion shows, low values of the bandwidth parameter decrease the bias at the cost of a high variance (insufficient smoothing). We also note that sparser regions of the design density result in larger variance of the estimator. The unknown terms, such as and would still need to be estimated. A convenient approach is rather to approximate the error for via the asymptotic mean integrated squared error (AMISE) defined to be\nwhere the design density and the weight function are included for stability reasons. According to this error criterion, the optimal bandwidth is given by:\nwhere and\nThe unknown quantities, and can be estimated by fitting a th-degree polynomial parametric fit where . The estimated bandwidth provides us with a rough and quick approach to calculating a bandwidth value to use in practice.\nFor theoretical results related to the estimator of , such as the asymptotic distribution when the bandwidth and (thus allowing us to create a confidence band around the estimator), and for the form of the bias and variance of the estimator when is an interior and a boundary point, refer to Fan et al. [1995 ###reference_b12###].\nAll the computations were performed using the R language and environment for statistical computing, R Core Team [2023 ###reference_b31###]. For codes, refer to https://github.com/gitgelila/TempSeq ###reference_###.\nThe bandwidth value from section 5 ###reference_### is estimated following the above rule-of-thumb procedure to produce . To compute , we first need to estimate the unknown quantities and . For a particular permutation, , of the true temporal sequence of a set of 10 documents, and on which the TempSeq method is to be run (see section 5 ###reference_###), the data has the form\nThe notation identifies the ith document after permutation. counts the number of occurrences of the word in document which has a total of number of words. is the temporal rank of the date of issue of document .\nUsing the glm (generalized linear model) function from the R statistical package, a parametric second degree polynomial logistic regression was fit to . This fit was used to estimate and also . The kernel function is the Student\u2019s t-density function with degree of freedom equal to 5. The weight term was set to equal at each of the temporal positions of the independent variable, and zero elsewhere. The term (in equation 11 ###reference_###) was numerically computed by randomly drawing samples from the Student\u2019s t-distribution with 5 degrees of freedom. The second moment of the Student\u2019s t-distribution with 5 degrees of freedom, ."
|
| 78 |
+
}
|
| 79 |
+
],
|
| 80 |
+
"tables": {},
|
| 81 |
+
"image_paths": {
|
| 82 |
+
"1": {
|
| 83 |
+
"figure_path": "2311.02578v3_figure_1.png",
|
| 84 |
+
"caption": "Figure 1: Asterisks show the proportion of occurrences of the words Drug(s) in the SOTU corpus. The solid curve is based on a larger bandwidth value than that of the dashed-lined curve. The dotted curve (the horizontal dotted line) is based on a very large bandwidth value. Date (time) is the x\ud835\udc65xitalic_x-axis and \u03c0^w,h\u2062(t)subscript^\ud835\udf0b\ud835\udc64\u210e\ud835\udc61\\hat{\\pi}_{w,h}(t)over^ start_ARG italic_\u03c0 end_ARG start_POSTSUBSCRIPT italic_w , italic_h end_POSTSUBSCRIPT ( italic_t ) is the y\ud835\udc66yitalic_y-axis.",
|
| 85 |
+
"url": "http://arxiv.org/html/2311.02578v3/extracted/5869703/RplotDrugs.png"
|
| 86 |
+
},
|
| 87 |
+
"2": {
|
| 88 |
+
"figure_path": "2311.02578v3_figure_2.png",
|
| 89 |
+
"caption": "Figure 2: Asterisks show the proportion of occurrences of the phrase Angl(ic)is in the DEEDS corpus. The solid curve is based on a larger bandwidth value than that of the dashed-lined curve. Date (time) is the x\ud835\udc65xitalic_x-axis and \u03c0^w,h\u2062(t)subscript^\ud835\udf0b\ud835\udc64\u210e\ud835\udc61\\hat{\\pi}_{w,h}(t)over^ start_ARG italic_\u03c0 end_ARG start_POSTSUBSCRIPT italic_w , italic_h end_POSTSUBSCRIPT ( italic_t ) is the y\ud835\udc66yitalic_y-axis.",
|
| 90 |
+
"url": "http://arxiv.org/html/2311.02578v3/extracted/5869703/RplotAngl_ic_is.png"
|
| 91 |
+
},
|
| 92 |
+
"3": {
|
| 93 |
+
"figure_path": "2311.02578v3_figure_3.png",
|
| 94 |
+
"caption": "Figure 3: Asterisks show the proportion of occurrences of the word de (of). The smoothed solid probability curve is uniform across the date range. Date (time) is the x\ud835\udc65xitalic_x-axis and \u03c0^w,h\u2062(t)subscript^\ud835\udf0b\ud835\udc64\u210e\ud835\udc61\\hat{\\pi}_{w,h}(t)over^ start_ARG italic_\u03c0 end_ARG start_POSTSUBSCRIPT italic_w , italic_h end_POSTSUBSCRIPT ( italic_t ) is the y\ud835\udc66yitalic_y-axis.",
|
| 95 |
+
"url": "http://arxiv.org/html/2311.02578v3/extracted/5869703/De.png"
|
| 96 |
+
},
|
| 97 |
+
"4": {
|
| 98 |
+
"figure_path": "2311.02578v3_figure_4.png",
|
| 99 |
+
"caption": "Figure 6: Box plots of the correlation coefficients (in absolute terms) of the estimated rank orders of sets of 10 documents and their true rank orders, replicated 100 times. The first plot corresponds to the State of the Union Address corpus (SOTU), the second to the DEEDS-conflated corpus, and the final plot is the baseline (random).",
|
| 100 |
+
"url": "http://arxiv.org/html/2311.02578v3/x1.png"
|
| 101 |
+
},
|
| 102 |
+
"5": {
|
| 103 |
+
"figure_path": "2311.02578v3_figure_5.png",
|
| 104 |
+
"caption": "Figure 7: Box plots of the correlation coefficients (in absolute terms) of the estimated rank orders of sets of 10 documents and their true rank orders, replicated 100 times. The first plot corresponds to the DEEDS-single corpus, and the second to the baseline (random).",
|
| 105 |
+
"url": "http://arxiv.org/html/2311.02578v3/x2.png"
|
| 106 |
+
},
|
| 107 |
+
"6": {
|
| 108 |
+
"figure_path": "2311.02578v3_figure_6.png",
|
| 109 |
+
"caption": "Figure 8: A bar graph of the frequencies of the usage of the words Britain, Families and Court in each presidential speech, indicated by year.",
|
| 110 |
+
"url": "http://arxiv.org/html/2311.02578v3/x3.png"
|
| 111 |
+
},
|
| 112 |
+
"7": {
|
| 113 |
+
"figure_path": "2311.02578v3_figure_7.png",
|
| 114 |
+
"caption": "Figure 9: On the left side is a box plot illustrating the bandwidth values obtained for the State of the Union (SOTU) documents with the lowest 10th percentile correlations. On the right side is a corresponding box plot illustrating the bandwidth values obtained under the correct temporal ordering for the documents with the lowest 10th percentile correlations.",
|
| 115 |
+
"url": "http://arxiv.org/html/2311.02578v3/x4.png"
|
| 116 |
+
},
|
| 117 |
+
"8": {
|
| 118 |
+
"figure_path": "2311.02578v3_figure_8.png",
|
| 119 |
+
"caption": "Figure 10: On the left side is a box plot illustrating the bandwidth values obtained for the DEEDS-conflated documents with the lowest 10th percentile correlations. On the right side is a box plot illustrating the bandwidth values obtained under the correct temporal ordering for the documents with the lowest 10th percentile correlations.",
|
| 120 |
+
"url": "http://arxiv.org/html/2311.02578v3/x5.png"
|
| 121 |
+
}
|
| 122 |
+
},
|
| 123 |
+
"validation": true,
|
| 124 |
+
"references": [
|
| 125 |
+
{
|
| 126 |
+
"1": {
|
| 127 |
+
"title": "Categorical Data Analysis.",
|
| 128 |
+
"author": "Alan Agresti.",
|
| 129 |
+
"venue": "John Wiley & Sons, Inc., Hoboken, New Jersey, 2002.",
|
| 130 |
+
"url": null
|
| 131 |
+
}
|
| 132 |
+
},
|
| 133 |
+
{
|
| 134 |
+
"2": {
|
| 135 |
+
"title": "United States presidential state of the union addresses, 2022.",
|
| 136 |
+
"author": "Taylor B. Arnold.",
|
| 137 |
+
"venue": "URL https://github.com/statsmaths/sotu/.",
|
| 138 |
+
"url": null
|
| 139 |
+
}
|
| 140 |
+
},
|
| 141 |
+
{
|
| 142 |
+
"3": {
|
| 143 |
+
"title": "Dynamic topic models.",
|
| 144 |
+
"author": "David M Blei and John D Lafferty.",
|
| 145 |
+
"venue": "In Proceedings of the 23rd International Conference on Machine Learning, pages 113\u2013120. ACM, 2006.",
|
| 146 |
+
"url": null
|
| 147 |
+
}
|
| 148 |
+
},
|
| 149 |
+
{
|
| 150 |
+
"4": {
|
| 151 |
+
"title": "Latent Dirichlet allocation.",
|
| 152 |
+
"author": "David M Blei, Andrew Y Ng, and Michael I Jordan.",
|
| 153 |
+
"venue": "Journal of Machine Learning Research, 3(Jan):993\u20131022, 2003.",
|
| 154 |
+
"url": null
|
| 155 |
+
}
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"5": {
|
| 159 |
+
"title": "On the resemblance and containment of documents.",
|
| 160 |
+
"author": "Andrei Z Broder.",
|
| 161 |
+
"venue": "In International Conference on Compression and Complexity of Sequences, pages 21\u201329. IEEE Computer Society, Los Alamitos, California, 1997.",
|
| 162 |
+
"url": null
|
| 163 |
+
}
|
| 164 |
+
},
|
| 165 |
+
{
|
| 166 |
+
"6": {
|
| 167 |
+
"title": "Labeling documents with timestamps: Learning from their time expressions.",
|
| 168 |
+
"author": "Nathanael Chambers.",
|
| 169 |
+
"venue": "In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 98\u2013106. Association for Computational Linguistics, 2012.",
|
| 170 |
+
"url": null
|
| 171 |
+
}
|
| 172 |
+
},
|
| 173 |
+
{
|
| 174 |
+
"7": {
|
| 175 |
+
"title": "Learning to order things.",
|
| 176 |
+
"author": "William W Cohen, Robert E Schapire, and Yoram Singer.",
|
| 177 |
+
"venue": "In Advances in Neural Information Processing Systems, pages 451\u2013457, 1998.",
|
| 178 |
+
"url": null
|
| 179 |
+
}
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"8": {
|
| 183 |
+
"title": "Finding the forger: An alleged decree of the 679 council of Hatfield.",
|
| 184 |
+
"author": "Catherine Cubitt.",
|
| 185 |
+
"venue": "The English Historical Review, 114(459):1217\u20131248, 1999.",
|
| 186 |
+
"url": null
|
| 187 |
+
}
|
| 188 |
+
},
|
| 189 |
+
{
|
| 190 |
+
"9": {
|
| 191 |
+
"title": "Temporal language models for the disclosure of historical text.",
|
| 192 |
+
"author": "Franciska De Jong, Henning Rode, and Djoerd Hiemstra.",
|
| 193 |
+
"venue": "In Humanities, Computers and Cultural Heritage: Proceedings of the XVIth International Conference of the Association for History and Computing (AHC 2005), pages 161\u2013168. Koninklijke Nederlandse Academie van Wetenschappen, 2005.",
|
| 194 |
+
"url": null
|
| 195 |
+
}
|
| 196 |
+
},
|
| 197 |
+
{
|
| 198 |
+
"10": {
|
| 199 |
+
"title": "Indexing by latent semantic analysis.",
|
| 200 |
+
"author": "Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman.",
|
| 201 |
+
"venue": "Journal of the American Society for Information Science, 41(6):391\u2013407, 1990.",
|
| 202 |
+
"url": null
|
| 203 |
+
}
|
| 204 |
+
},
|
| 205 |
+
{
|
| 206 |
+
"11": {
|
| 207 |
+
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding.",
|
| 208 |
+
"author": "Jacob Devlin.",
|
| 209 |
+
"venue": "arXiv preprint arXiv:1810.04805, 2018.",
|
| 210 |
+
"url": null
|
| 211 |
+
}
|
| 212 |
+
},
|
| 213 |
+
{
|
| 214 |
+
"12": {
|
| 215 |
+
"title": "Local polynomial kernel regression for generalized linear models and quasi-likelihood functions.",
|
| 216 |
+
"author": "Jianqing Fan, Nancy E Heckman, and Matt P Wand.",
|
| 217 |
+
"venue": "Journal of the American Statistical Association, 90(429):141\u2013150, 1995.",
|
| 218 |
+
"url": null
|
| 219 |
+
}
|
| 220 |
+
},
|
| 221 |
+
{
|
| 222 |
+
"13": {
|
| 223 |
+
"title": "Distance measures and smoothing methodology for imputing features of documents.",
|
| 224 |
+
"author": "Andrey Feuerverger, Peter Hall, Gelila Tilahun, and Michael Gervers.",
|
| 225 |
+
"venue": "Journal of Computational and Graphical Statistics, 14(2):255\u2013262, 2005.",
|
| 226 |
+
"url": null
|
| 227 |
+
}
|
| 228 |
+
},
|
| 229 |
+
{
|
| 230 |
+
"14": {
|
| 231 |
+
"title": "Using statistical smoothing to date medieval manuscripts.",
|
| 232 |
+
"author": "Andrey Feuerverger, Peter Hall, Gelila Tilahun, and Michael Gervers.",
|
| 233 |
+
"venue": "In Beyond Parametrics in Interdisciplinary Research: Festschrift in Honour of Professor Pranab K. Sen, pages 321\u2013331. Institute of Mathematical Statistics, 2008.",
|
| 234 |
+
"url": null
|
| 235 |
+
}
|
| 236 |
+
},
|
| 237 |
+
{
|
| 238 |
+
"15": {
|
| 239 |
+
"title": "Dating Undated Medieval Charters.",
|
| 240 |
+
"author": "Michael Gervers.",
|
| 241 |
+
"venue": "Boydell & Brewer Ltd, 2002.",
|
| 242 |
+
"url": null
|
| 243 |
+
}
|
| 244 |
+
},
|
| 245 |
+
{
|
| 246 |
+
"16": {
|
| 247 |
+
"title": "Topic modeling and the resolution of a medieval English diplomatic enigma.",
|
| 248 |
+
"author": "Michael Gervers and Gelila Tilahun.",
|
| 249 |
+
"venue": "In \u017darko Vujo\u0161evi\u0107 and Neboj\u0161a Por\u010di\u0107, editors, Archives and Archival Research in the Digital Environment. A Thematic Volume., pages 151\u2013170. Belgrade: University of Belgrade-Faculty of Philosophy, 2023.",
|
| 250 |
+
"url": null
|
| 251 |
+
}
|
| 252 |
+
},
|
| 253 |
+
{
|
| 254 |
+
"17": {
|
| 255 |
+
"title": "The dating of undated medieval charters.",
|
| 256 |
+
"author": "Michael Gervers, Gelila Tilahun, Shima Khoshraftar, Roderick Mitchell, and Ariella Elema.",
|
| 257 |
+
"venue": "Journal of the British Records Association, 53(136):1\u201333, 2018.",
|
| 258 |
+
"url": null
|
| 259 |
+
}
|
| 260 |
+
},
|
| 261 |
+
{
|
| 262 |
+
"18": {
|
| 263 |
+
"title": "The making of medieval forgeries: false documents in fifteenth-century England.",
|
| 264 |
+
"author": "Alfred Hiatt.",
|
| 265 |
+
"venue": "University of Toronto Press, 2004.",
|
| 266 |
+
"url": null
|
| 267 |
+
}
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"19": {
|
| 271 |
+
"title": "Probabilistic latent semantic analysis.",
|
| 272 |
+
"author": "Thomas Hofmann.",
|
| 273 |
+
"venue": "In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 289\u2013296. Morgan Kaufmann Publishers Inc., 1999.",
|
| 274 |
+
"url": null
|
| 275 |
+
}
|
| 276 |
+
},
|
| 277 |
+
{
|
| 278 |
+
"20": {
|
| 279 |
+
"title": "Improving temporal language models for determining time of non-timestamped documents.",
|
| 280 |
+
"author": "Nattiya Kanhabua and Kjetil N\u00f8rv\u00e5g.",
|
| 281 |
+
"venue": "In International Conference on Theory and Practice of Digital Libraries, pages 358\u2013370. Springer, 2008.",
|
| 282 |
+
"url": null
|
| 283 |
+
}
|
| 284 |
+
},
|
| 285 |
+
{
|
| 286 |
+
"21": {
|
| 287 |
+
"title": "Using temporal language models for document dating.",
|
| 288 |
+
"author": "Nattiya Kanhabua and Kjetil N\u00f8rv\u00e5g.",
|
| 289 |
+
"venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 738\u2013741. Springer, 2009.",
|
| 290 |
+
"url": null
|
| 291 |
+
}
|
| 292 |
+
},
|
| 293 |
+
{
|
| 294 |
+
"22": {
|
| 295 |
+
"title": "Optimization by simulated annealing.",
|
| 296 |
+
"author": "Scott Kirkpatrick, C Daniel Gelatt, and Mario P Vecchi.",
|
| 297 |
+
"venue": "Science, 220(4598):671\u2013680, 1983.",
|
| 298 |
+
"url": null
|
| 299 |
+
}
|
| 300 |
+
},
|
| 301 |
+
{
|
| 302 |
+
"23": {
|
| 303 |
+
"title": "A burstiness-aware approach for document dating.",
|
| 304 |
+
"author": "Dimitrios Kotsakos, Theodoros Lappas, Dimitrios Kotzias, Dimitrios Gunopulos, Nattiya Kanhabua, and Kjetil N\u00f8rv\u00e5g.",
|
| 305 |
+
"venue": "In Proceedings of the 37th international ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1003\u20131006. ACM, 2014.",
|
| 306 |
+
"url": null
|
| 307 |
+
}
|
| 308 |
+
},
|
| 309 |
+
{
|
| 310 |
+
"24": {
|
| 311 |
+
"title": "Local likelihood modeling of temporal text streams.",
|
| 312 |
+
"author": "Guy Lebanon and Yang Zhao.",
|
| 313 |
+
"venue": "In Proceedings of the 25th International Conference on Machine Learning, pages 552\u2013559. ACM, 2008.",
|
| 314 |
+
"url": null
|
| 315 |
+
}
|
| 316 |
+
},
|
| 317 |
+
{
|
| 318 |
+
"25": {
|
| 319 |
+
"title": "Time-based language models.",
|
| 320 |
+
"author": "Xiaoyan Li and W Bruce Croft.",
|
| 321 |
+
"venue": "In Proceedings of the twelfth international conference on Information and knowledge management, pages 469\u2013475, 2003.",
|
| 322 |
+
"url": null
|
| 323 |
+
}
|
| 324 |
+
},
|
| 325 |
+
{
|
| 326 |
+
"26": {
|
| 327 |
+
"title": "Robust temporal processing of news.",
|
| 328 |
+
"author": "Inderjeet Mani and George Wilson.",
|
| 329 |
+
"venue": "In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 69\u201376, 2000.",
|
| 330 |
+
"url": null
|
| 331 |
+
}
|
| 332 |
+
},
|
| 333 |
+
{
|
| 334 |
+
"27": {
|
| 335 |
+
"title": "Supervised topic models.",
|
| 336 |
+
"author": "Jon D McAuliffe and David M Blei.",
|
| 337 |
+
"venue": "In Advances in Neural Information Processing Systems, pages 121\u2013128, 2008.",
|
| 338 |
+
"url": null
|
| 339 |
+
}
|
| 340 |
+
},
|
| 341 |
+
{
|
| 342 |
+
"28": {
|
| 343 |
+
"title": "The smartest guys in the room.",
|
| 344 |
+
"author": "Bethany McLean and Peter Elkind.",
|
| 345 |
+
"venue": "The Amazing Rise, 2003.",
|
| 346 |
+
"url": null
|
| 347 |
+
}
|
| 348 |
+
},
|
| 349 |
+
{
|
| 350 |
+
"29": {
|
| 351 |
+
"title": "Efficient estimation of word representations in vector space.",
|
| 352 |
+
"author": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean.",
|
| 353 |
+
"venue": "arXiv preprint arXiv:1301.3781, 2013.",
|
| 354 |
+
"url": null
|
| 355 |
+
}
|
| 356 |
+
},
|
| 357 |
+
{
|
| 358 |
+
"30": {
|
| 359 |
+
"title": "Glove: Global vectors for word representation.",
|
| 360 |
+
"author": "Jeffrey Pennington, Richard Socher, and Christopher D Manning.",
|
| 361 |
+
"venue": "In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532\u20131543, 2014.",
|
| 362 |
+
"url": null
|
| 363 |
+
}
|
| 364 |
+
},
|
| 365 |
+
{
|
| 366 |
+
"31": {
|
| 367 |
+
"title": "R: A Language and Environment for Statistical Computing.",
|
| 368 |
+
"author": "R Core Team.",
|
| 369 |
+
"venue": "R Foundation for Statistical Computing, Vienna, Austria, 2023.",
|
| 370 |
+
"url": null
|
| 371 |
+
}
|
| 372 |
+
},
|
| 373 |
+
{
|
| 374 |
+
"32": {
|
| 375 |
+
"title": "A model of text for experimentation in the social sciences.",
|
| 376 |
+
"author": "Margaret E Roberts, Brandon M Stewart, and Edoardo M Airoldi.",
|
| 377 |
+
"venue": "Journal of the American Statistical Association, 111(515):988\u20131003, 2016.",
|
| 378 |
+
"url": null
|
| 379 |
+
}
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"33": {
|
| 383 |
+
"title": "Timelines: Constructing timelines with statistical models of word usage.",
|
| 384 |
+
"author": "Russell Swan and David Jensen.",
|
| 385 |
+
"venue": "In KDD-2000 Workshop on Text Mining, pages 73\u201380, 2000.",
|
| 386 |
+
"url": null
|
| 387 |
+
}
|
| 388 |
+
},
|
| 389 |
+
{
|
| 390 |
+
"34": {
|
| 391 |
+
"title": "Multinomial inverse regression for text analysis.",
|
| 392 |
+
"author": "Matt Taddy.",
|
| 393 |
+
"venue": "Journal of the American Statistical Association, 108(503):755\u2013770, 2013.",
|
| 394 |
+
"url": null
|
| 395 |
+
}
|
| 396 |
+
},
|
| 397 |
+
{
|
| 398 |
+
"35": {
|
| 399 |
+
"title": "Document classification by inversion of distributed language representations.",
|
| 400 |
+
"author": "Matt Taddy.",
|
| 401 |
+
"venue": "arXiv preprint arXiv:1504.07295, 2015.",
|
| 402 |
+
"url": null
|
| 403 |
+
}
|
| 404 |
+
},
|
| 405 |
+
{
|
| 406 |
+
"36": {
|
| 407 |
+
"title": "On Statistical Sequencing of Document Collections.",
|
| 408 |
+
"author": "Ramya Thinniyam.",
|
| 409 |
+
"venue": "PhD thesis, University of Toronto, 2014.",
|
| 410 |
+
"url": null
|
| 411 |
+
}
|
| 412 |
+
},
|
| 413 |
+
{
|
| 414 |
+
"37": {
|
| 415 |
+
"title": "Application of the simulated annealing algorithm to the combinatorial optimisation problem with permutation property: An investigation of generation mechanism.",
|
| 416 |
+
"author": "Peng Tian, Jian Ma, and Dong-Mo Zhang.",
|
| 417 |
+
"venue": "European Journal of Operational Research, 118(1):81\u201394, 1999.",
|
| 418 |
+
"url": null
|
| 419 |
+
}
|
| 420 |
+
},
|
| 421 |
+
{
|
| 422 |
+
"38": {
|
| 423 |
+
"title": "Statistical Methods for Dating Collections of Historical Documents.",
|
| 424 |
+
"author": "Gelila Tilahun.",
|
| 425 |
+
"venue": "PhD thesis, University of Toronto, 2011.",
|
| 426 |
+
"url": null
|
| 427 |
+
}
|
| 428 |
+
},
|
| 429 |
+
{
|
| 430 |
+
"39": {
|
| 431 |
+
"title": "Dating medieval English charters.",
|
| 432 |
+
"author": "Gelila Tilahun, Andrey Feuerverger, and Michael Gervers.",
|
| 433 |
+
"venue": "The Annals of Applied Statistics, 6(4):1615\u20131640, 2012.",
|
| 434 |
+
"url": null
|
| 435 |
+
}
|
| 436 |
+
},
|
| 437 |
+
{
|
| 438 |
+
"40": {
|
| 439 |
+
"title": "Statistical approaches to the diplomatics of institutional topography.",
|
| 440 |
+
"author": "Gelila Tilahun, Michael Gervers, and Roderick Alexander Mitchell.",
|
| 441 |
+
"venue": "Archiv f\u00fcr Diplomatik, Schriftgeschichte, Siegel- und Wappenkunde, 62(1):351\u2013364, 2016.",
|
| 442 |
+
"url": null
|
| 443 |
+
}
|
| 444 |
+
},
|
| 445 |
+
{
|
| 446 |
+
"41": {
|
| 447 |
+
"title": "Dating documents using graph convolution networks.",
|
| 448 |
+
"author": "Shikhar Vashishth, Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha Talukdar.",
|
| 449 |
+
"venue": "arXiv preprint arXiv:1902.00175, 2019.",
|
| 450 |
+
"url": null
|
| 451 |
+
}
|
| 452 |
+
},
|
| 453 |
+
{
|
| 454 |
+
"42": {
|
| 455 |
+
"title": "English Historical Documents I, 2nd edn.",
|
| 456 |
+
"author": "Dorothy Whitelock.",
|
| 457 |
+
"venue": "Cambridge University Press, 1979.",
|
| 458 |
+
"url": null
|
| 459 |
+
}
|
| 460 |
+
}
|
| 461 |
+
],
|
| 462 |
+
"url": "http://arxiv.org/html/2311.02578v3"
|
| 463 |
+
}
|
20240921/2311.11208v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240921/2311.15153v6.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240921/2311.17404v2.json
ADDED
|
@@ -0,0 +1,127 @@
| 1 |
+
{
|
| 2 |
+
"title": "VITATECS: A Diagnostic Dataset for Temporal Concept Understanding of Video-Language Models",
|
| 3 |
+
"abstract": "The ability to perceive how objects change over time is a crucial ingredient in human intelligence.\nHowever, current benchmarks cannot faithfully reflect the temporal understanding abilities of video-language models (VidLMs) due to the existence of static visual shortcuts.\nTo remedy this issue, we present VITATECS, a diagnostic VIdeo-Text dAtaset for the evaluation of TEmporal Concept underStanding.\nSpecifically, we first introduce a fine-grained taxonomy of temporal concepts in natural language in order to diagnose the capability of VidLMs to comprehend different temporal aspects.\nFurthermore, to disentangle the correlation between static and temporal information, we generate counterfactual video descriptions that differ from the original one only in the specified temporal aspect. We employ a semi-automatic data collection framework using large language models and human-in-the-loop annotation to obtain high-quality counterfactual descriptions efficiently. Evaluation of representative video-language understanding models confirms their deficiency in temporal understanding, revealing the need for greater emphasis on the temporal elements in video-language research.\nOur dataset is publicly available at https://github.com/lscpku/VITATECS.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "###figure_1### Many important concepts in human languages contain a temporal dimension [11 ###reference_b11###, 22 ###reference_b22###], such as human actions, changes in status, and event order, which are beyond the expressive power of individual static images.\nSuch temporal concepts bring great challenges to video-language learning and are crucial for the generalization capability of intelligent systems in real-life scenarios.\nAlthough these temporal concepts are present in existing text-to-video retrieval [60 ###reference_b60###, 55 ###reference_b55###, 19 ###reference_b19###] or video question answering [59 ###reference_b59###, 64 ###reference_b64###] benchmarks, most of these datasets fail to faithfully assess the temporal understanding ability of Video-Language Models (VidLMs) due to the strong correlation between static objects/scenes and temporal information.\nFor example, in the blue box in Fig. 1 ###reference_###,\neach video can be aligned to its description by merely identifying the static objects such as the fire, the microphone, and the PC case.\nAs a consequence, the models may learn to simply rely on static clues to make predictions, leading to failure in real-world applications that require a genuine understanding of temporal concepts, e.g., to distinguish between the action of \u201cconnecting something to system\u201d and \u201cdisconnecting something from system\u201d as demonstrated by the red box in Fig. 1 ###reference_###.\nPrevious works [20 ###reference_b20###, 5 ###reference_b5###, 48 ###reference_b48###, 24 ###reference_b24###, 16 ###reference_b16###] have pointed out similar issues and provided several solutions. However, they do not properly define and categorize different aspects of temporal information.\nThe lack of a clear definition adds to the difficulty of assessing the precise abilities of VidLMs.\nAdditionally, they often construct evaluation datasets by following certain templates or using synthetic scenes, making them unsuitable for more diverse and realistic scenarios.\nIn light of the drawbacks of current video-language testbeds, we propose a new dataset for VidLMs, VITATECS, to fill the gap for temporal concept understanding evaluation by decoupling temporal information and static information.\nInspired by Winoground [50 ###reference_b50###], to measure the ability of VidLMs to understand and align the temporal concepts, we ask the models to distinguish between the correct caption of a video and a modified version of the caption which contains similar static information and only differs in temporal information.\nTo allow for a more comprehensive and fine-grained evaluation of temporal understanding ability, we summarize several aspects of temporal concepts that are commonly present in video descriptions, including Direction, Intensity, Sequence, Localization, Compositionality and Type, which according to our study cover most of the temporal information in video-language datasets.\nSince collecting high-quality video-text pairs is time-consuming and expensive, we follow previous works in dataset construction [32 ###reference_b32###, 47 ###reference_b47###, 40 ###reference_b40###, 62 ###reference_b62###, 38 ###reference_b38###], and augment existing open-domain video-language datasets by harnessing the world knowledge encoded in pre-trained large language models (LLMs) [39 ###reference_b39###].\nSpecifically, given an annotated video-text pair in the dataset, we ask the LLM to generate a counterfactual description that only differs from 
the original description in one given temporal aspect using in-context learning [4 ###reference_b4###].\nTo prevent potential mismatch when dealing with complex instructions, we design a human-in-the-loop procedure to filter out low-quality generations by iteratively generating counterfactual descriptions, human labeling, and fine-tuning a filter model.\nIn each iteration, the generated samples are used to update the filter model and the in-context learning exemplar set to boost generation and filtering quality.\nThis annotation framework allows us to construct a 13k+ dataset from 231 human-written counterfactuals while maintaining high quality and diversity.\nBased on our dataset, we conduct a comprehensive evaluation of state-of-the-art video-language understanding models.\nOur findings can be summarized as follows.\nExisting models barely surpass random guesses in many aspects, confirming their general lack of temporal understanding.\nTemporally-adapted image-text models outperform video-text pre-training, but primarily due to better utilization of static clues.\nFailure of text encoders to learn temporal concepts during pre-training is partly responsible for low performance on temporal understanding.\nDifferent video-text datasets tend to invoke different temporal understanding abilities.\nIn summary, our work with VITATECS sheds light on limitations in current VidLMs\u2019 temporal understanding, providing insights for future development."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "VITATECS: Diagnosing Temporal Concept Understanding",
|
| 21 |
+
"text": "In this section, we propose VITATECS, a new dataset for measuring how well VidLMs capture temporal information across modalities. It consists of (video, caption, counterfactual) triples, where the counterfactual description retains the same static information as the original caption while modifying its temporal information in one of the six fine-grained aspects that we define in Sec. 3.1 ###reference_###.\nWe elaborate on the details of our temporal dataset in Sec. 3.2 ###reference_### and the human-in-the-loop annotation framework we devise to facilitate its construction process in Sec. 3.3 ###reference_###."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "Fine-Grained Temporal Understanding",
|
| 27 |
+
"text": "###figure_2### Measuring the temporal understanding ability of VidLMs is a challenging task.\nOn one hand, it is not clear how to define and characterize the temporal information in a video.\nPrevious works [48 ###reference_b48###, 24 ###reference_b24###, 40 ###reference_b40###] draw a rough equivalence between temporal information and the actions in the video.\nIn reality, temporal information can emerge in a variety of forms, such as human actions, changes in object status, dynamics of substances, the order of events, etc., and is widely manifested in daily activities.\nOn the other hand, it is infeasible to completely disentangle the temporal information from the static information.\nThe background scenes, objects, and people\u2019s postures are all highly correlated with the temporal information in open-domain videos.\nIf not properly controlled, such static bias would allow models to rely on static clues as shortcuts for making predictions while seemingly learning to capture the temporal information.\nTo achieve high coverage of temporal information in video-language datasets and allow for fine-grained diagnosis of temporal understanding abilities, we identify six aspects of temporal concepts commonly reflected in natural language: Direction, Intensity, Sequence, Localization, Compositionality and Type.\nThese aspects of temporal information are disentangled from static information to different degrees and address different facets of the temporal information in video-language datasets, allowing us to pinpoint the temporal understanding abilities of VidLMs.\nSince our final target is to construct text pairs with aspect-specific modifications, for clarity, we define these aspects in terms of the temporal questions they address and the corresponding modification patterns as follows.\n\u201cDirection\u201d measures the model\u2019s ability to answer the following question: \u201cIn which direction does the status of objects change?\u201d Examples of this aspect include sentence pairs describing opposite spatial movements or one action reversing the effect of the other.\n\u201cIntensity\u201d measures the model\u2019s ability to answer the following question: \u201cHow fast or how intense does the change occur?\u201d Examples of this aspect include counterfactual sentences which change the words that modify the verbs or change the verb to a similar action with subtle differences in the manner it is conducted.\n\u201cSequence\u201d measures the model\u2019s ability to answer the following question: \u201cHow many events are depicted in the video and in what order?\u201d Examples of this aspect usually involve changing the temporal order or number of occurrences of the events.\n\u201cLocalization\u201d measures the model\u2019s ability to answer the following question: \u201cOn which part of the frame does the change occur?\u201d Examples of this aspect include sentence pairs with the same action conducted either in different absolute spatial locations or in different locations in relation to other objects in the video.\n\u201cCompositionality\u201d measures the model\u2019s ability to answer the following question: \u201cWho performed which action and to whom?\u201d Examples of this aspect often include actions with interchanged subjects or objects.\n\u201cType\u201d measures the model\u2019s ability to answer the following question: \u201cWhat is the action depicted in the video?\u201d This aspect contains general alterations to the actions with a less stringent constraint on the 
static information contained.\nTo validate the coverage of our temporal concept categorization, we randomly sample 200 video-text pairs from MSR-VTT [60 ###reference_b60###] and VATEX [55 ###reference_b55###] and inspect the types of temporal information they contain. We find that for 98% of the samples, their temporal information falls in one of our categories, which demonstrates that our taxonomy is able to achieve high coverage while taking into account the disentanglement from static information."
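The six aspects and the diagnostic question each one targets can be summarized as a simple mapping; the Python snippet below is purely illustrative and paraphrases the definitions above.

```python
# The six temporal aspects of VITATECS and the question each one probes
# (paraphrased from the definitions above; the structure itself is illustrative).
TEMPORAL_ASPECTS = {
    "Direction":        "In which direction does the status of objects change?",
    "Intensity":        "How fast or how intense does the change occur?",
    "Sequence":         "How many events are depicted in the video and in what order?",
    "Localization":     "On which part of the frame does the change occur?",
    "Compositionality": "Who performed which action and to whom?",
    "Type":             "What is the action depicted in the video?",
}
```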
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.2",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "Dataset Format",
|
| 33 |
+
"text": "Following Winoground [50 ###reference_b50###], we measure the ability of VidLMs to match the videos to their correct descriptions among some well-designed choices as a proxy for their temporal understanding abilities.\nSpecifically, for each aspect , we collect (video, caption, counterfactual) triples where denotes the number of samples for aspect , denotes the video, denotes the true caption of the video, and is the counterfactual description that differs from only in the temporal aspect . Fig. 2 ###reference_### shows examples of our dataset.\n###figure_3###"
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.3",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Human-in-the-Loop Annotation Framework",
|
| 39 |
+
"text": "Due to the heavy expenses of collecting high-quality (video, caption, counterfactual) triples, we present a human-in-the-loop annotation framework for semi-automatic counterfactual generation based on existing (video, caption) datasets.\nAt the core of our framework is a loop consisting of three stages: generation, filtering, and revision.\nIn stage 1, we use in-context learning [4 ###reference_b4###, 10 ###reference_b10###] to generate candidate counterfactuals based on ground-truth video-text pairs with LLMs.\nIn stage 2, the candidates are filtered using a combination of rules, off-the-shelf language understanding models, and fine-tuned language understanding models.\nIn stage 3, we ask human annotators to verify the quality of the candidates and use the high-quality ones to refine the generation process and the filter model.\nThe three stages are conducted on a small subset and are repeated until the filter model achieves satisfactory precision on a held-out evaluation set.\nBelow, we first lay out the criteria for our counterfactual descriptions and then elaborate on the details of each stage.\nDuring our effort to construct the dataset, we found that LLMs encounter some difficulty in following our instructions when generating counterfactual descriptions, possibly due to the reflective nature of our temporal concepts.\nTo enable consistent and high-quality counterfactual generation, we first identify five major criteria for measuring the quality of generated counterfactuals as follows.\nThe counterfactual should neither entail nor be entailed by the caption.\nThe counterfactual should contain roughly the same amount of information as the caption.\nThe counterfactual should be grammatically correct and semantically plausible.\nThe counterfactual should retain the static information in the caption and only change the given aspect of temporal information.\nThe pattern of counterfactual description should be diverse across the entire dataset.\nAmong these desirable properties, criteria (a)-(d) are instance-level criteria we aim to address in both the generation and filtering stages. 
In contrast, criterion (e) is a dataset-level criterion dealt with in a finalization step after the filter model has converged.\nThroughout our annotation process, we maintain three sets of exemplars: positive set contains sentence pairs that differ only in a given aspect; negative set contains sentence pairs that violate one of the aforementioned criteria (a)-(d); N/A set contains captions that do not describe a certain aspect of the temporal concept.\nThese exemplars serve two purposes: on the one hand, they compose the demonstrations of valid and invalid data samples for in-context learning, which supply the generative language models with clearer and better-informed instructions; on the other hand, they provide supervision signals for the fine-tuning of the filter model.\nThese three sets are initialized with manually annotated examples and expanded semi-automatically to boost the generation and filter model performance as more data samples are generated.\nThe size and examples of the initial exemplar sets are available in the Appendix.\nIn this stage, we draw upon the generative strength of ChatGPT (gpt-3.5-turbo-0613) [39 ###reference_b39###] to generate counterfactual descriptions given the original caption and the desired aspect of variation.\nThe use of in-context learning allows us to capture the different aspects of temporal concepts through carefully-designed instructions and demonstrations.\nSpecifically, we first randomly sample a small subset (500 for each aspect) of (video, caption) pairs from the test sets of two popular video-text retrieval datasets, MSR-VTT [60 ###reference_b60###] and VATEX [55 ###reference_b55###].\nThen, for each (video, caption) pair, we invoke the instruction following and pattern replication abilities of ChatGPT by constructing a prompt consisting of an aspect-specific instruction, demonstrations sampled from the exemplar sets, and the query for which we aim to generate the counterfactual description.\nThe demonstrations are sampled from both and so that the LLM not only learns to generate counterfactual descriptions for valid captions but also learns to recognize which captions do not concern the temporal aspect of interest.\nIn view of the uneven quality of generated examples, we propose to filter the candidates and automatize this procedure using natural language understanding models.\nFirst, we leverage an off-the-shelf natural language inference (NLI) model, Sentence-BERT [43 ###reference_b43###], to filter out examples that do not meet criterion (a), i.e., the cases where one description entails the other.\nThen, to filter out candidates that do not meet criterion (b)-(d), we use a neural network that takes a pair of sentences as input and performs a 7-way classification task, where category 0 corresponds to disqualified generations and categories 1-6 correspond to the six aspects we define.\nConsidering the similarity in task formulation, we initialize the filter model with the same NLI model above.\nThe fine-tuning data consists of samples from both and .\nWe adopt a rigorous decision mechanism that classifies the given sentence pair into one of the six aspects only if the model makes consistent predictions for the pair and its reversed version with high confidence, as we care more about the precision of the filter model than its recall.\nTo guarantee the quality of filtered examples and guide both the in-context learning procedure and the filter model in the right direction, we introduce human supervision to revise the filtering 
results.\nWe manually check the samples that are predicted to fall in one of the six aspects and correct the wrong predictions.\nNote that, on the one hand, due to the relatively small size of the sampled subset and the rigorous confidence-based filtering procedure, the number of examples for human revision is reduced significantly; on the other hand, human annotators only need to rectify the predicted labels instead of writing the entire counterfactual description.\nTherefore, this revision stage does not require excessive human effort and only incurs acceptable annotation costs.\nWe repeat the generation, filtering, and revision procedure to iteratively enlarge the exemplar sets and refine the filter model.\nIn each iteration, the previously revised examples are incorporated into and according to their labels.\nThis simultaneously augments the demonstration set of in-context learning for better generation quality and provides more training data for fine-tuning the filter model.\nAfter each iteration, the fine-tuned filter model is evaluated on an independently annotated test set.\nWe terminate the iteration once no significant improvement of the filter model is observed.\nAfter the filter model has converged, we perform generation and filtering on a larger scale (20,000 for each aspect) without human revision.\nAs a finalization step, we address the issue of diversity by favoring generations that involve a less common change of verb throughout the dataset when merging the filtered samples.\nOur framework can be easily scaled to generate larger datasets since no more human efforts are required once the filter model has converged.\nIn our case, it only takes 231 human-written descriptions and around 1500 labeling annotations to obtain the final benchmark with 13k+ samples, showing the efficiency of our annotation framework. The statistics of our dataset are shown in Tab. 2 ###reference_###.\nWe also manually check the quality of VITATECS by sampling 100 instances from each aspect and find that 94.8% of them satisfy our criteria. See Appendix A for more details on the quality check process."
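The confidence-based filtering decision described above can be sketched as follows; the threshold value and function signature are illustrative, since the text only specifies consistent, high-confidence predictions on the sentence pair and its reversed version.

```python
import numpy as np

def filter_decision(probs_fwd, probs_rev, threshold=0.9):
    """Accept a (caption, counterfactual) pair only if the 7-way filter model is
    consistent and confident on both orderings of the pair.

    probs_fwd, probs_rev : arrays of 7 class probabilities
        (class 0 = disqualified, classes 1-6 = the six temporal aspects).
    Returns the predicted aspect index (1-6) or None if the pair is rejected.
    """
    probs_fwd, probs_rev = np.asarray(probs_fwd), np.asarray(probs_rev)
    pred_fwd, pred_rev = int(np.argmax(probs_fwd)), int(np.argmax(probs_rev))
    confident = probs_fwd[pred_fwd] >= threshold and probs_rev[pred_rev] >= threshold
    if confident and pred_fwd == pred_rev and pred_fwd != 0:
        return pred_fwd
    return None  # low confidence, inconsistent predictions, or disqualified
```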
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "Evaluation of Video-Language Models",
|
| 45 |
+
"text": "In this section, we evaluate prevailing VidLMs to examine their temporal understanding ability.\nWe first introduce the evaluation settings and then discuss the findings drawn from our evaluation to facilitate future studies."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.1",
|
| 49 |
+
"parent_section_id": "4",
|
| 50 |
+
"section_name": "Experimental Setup",
|
| 51 |
+
"text": "In our experiments, we focus on models designed for the video-text retrieval task, which can calculate the similarity score between a video and a text query.\nWe test three pre-trained VidLMs (VIOLET [12 ###reference_b12###], ALPRO [26 ###reference_b26###] and Singularity [24 ###reference_b24###]) and three temporally-adapted image-language models (CLIP4Clip [35 ###reference_b35###], X-Pool [17 ###reference_b17###] and X-CLIP [36 ###reference_b36###]).\nWe also include two recent video large language models, Video-LLaMA [66 ###reference_b66###] and VideoChat [28 ###reference_b28###], as well as pure image-text foundation models such as BLIP [27 ###reference_b27###], which has shown strong performance on zero-shot video-text retrieval.\nA model\u2019s prediction is considered correct if the similarity score of the correct caption is higher than that of the generated counterfactual.\nWe measure the accuracy of the models on each of the six aspects of temporal concepts, and explore a recall-based metric in Sec. 4.3 ###reference_###.\nWe randomly choose 100 samples for each aspect from our dataset and ask five volunteers to help establish a human performance baseline. The annotators are shown a video and two text descriptions at a time and are required to choose the text that best describes the video.\nWe report the average accuracy of the five annotators as the human baseline."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.2",
|
| 55 |
+
"parent_section_id": "4",
|
| 56 |
+
"section_name": "Evaluation Results",
|
| 57 |
+
"text": "As shown in Tab. 3 ###reference_###, although humans can easily match the videos to their correct descriptions with high consistency () and nearly no mistakes, the overall performance of all the evaluated models is still far from expectations.\nNo model achieves an accuracy of over 70% on the temporal aspects other than the relatively easy \u201cType\u201d aspect, which has the strongest correlation with the static information.\nParticularly, on the more temporally demanding aspects (\u201cDirection\u201d, \u201cIntensity\u201d, and \u201cSequence\u201d), the models perform barely over the random baseline (50%).\nConsidering that part of our videos directly comes from MSR-VTT, the poor performance of models fine-tuned on MSR-VTT reaffirms our statement that existing video-language datasets are incapable of assessing the temporal understanding ability of models.\nAmong the models we evaluate, the temporally-adapted image-text models based on CLIP generally outperform the models with video-text pre-training.\nTo further investigate how much the temporal aggregation modules contribute to the temporal understanding abilities of the CLIP-based models, we disable the temporal aggregation module in these models and replace it with a simple mean pooling layer.\nThe results are shown in Tab. 4 ###reference_###.\nContrary to what is expected, disabling the temporal aggregation module only results in a slight drop in performance for X-Pool. It even improves the temporal understanding ability of CLIP4Clip and X-CLIP.\nThis suggests that these temporal aggregation modules are potentially under-trained due to the weak requirement of temporal modeling in video-language datasets like MSR-VTT.\nConsequently, the superiority of the CLIP-based models mainly stems from the effective utilization of the static information in the video instead of a true understanding of the temporal concepts.\nFor a similar reason, image-text models are able to achieve comparable performance on our dataset without further video-text training.\nWe calculate the average cosine similarity between the representations of the original captions and the counterfactual descriptions with different text encoders.\nAs shown in Tab. 5 ###reference_###, both the CLIP text encoder and Sentence-BERT produce highly similar sentence representations for samples in the \u201cSequence\u201d aspect, indicating that the struggle of the evaluated models can partly be explained by the inability of text encoders to recognize the temporal distinction between the captions and the counterfactual descriptions.\nWe also notice that the CLIP text encoder generally produces higher similarity scores even after it is fine-tuned on video-text data.\nThis suggests that the ability to identify temporal concepts in natural language may be lost during the image-text pre-training stage and cannot be recovered by fine-tuning on existing video-language datasets.\nWe conduct a comparison between the performance of VidLMs fine-tuned on different downstream datasets.\nThe results are shown in Tab. 
6 ###reference_###.\nWe find that models fine-tuned on different text-to-video retrieval datasets exhibit different temporal understanding abilities.\nFor example, DiDeMo tends to elicit higher accuracy on \u201cLocalization\u201d and \u201cCompositionality\u201d, while LSMDC contributes to better understanding of \u201cIntensity\u201d.\nAlso, since SSv2 only depicts single human actions, it brings benefits on the \u201cDirection\u201d aspect but not on \u201cSequence\u201d understanding, which can be improved by fine-tuning on datasets with longer video duration and dense captions such as YouCook2.\nThis finding advocates the use of diverse videos and captions in the training process."
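The text-encoder analysis above relies on the average cosine similarity between caption and counterfactual embeddings; a minimal sketch, assuming both embedding matrices come from the same encoder and are row-aligned:

```python
import numpy as np

def mean_pairwise_cosine(caption_embs, counterfactual_embs):
    """Average cosine similarity between each caption embedding and the embedding
    of its paired counterfactual; inputs have shape (num_pairs, dim)."""
    a = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    b = counterfactual_embs / np.linalg.norm(counterfactual_embs, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))
```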
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.3",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "Discussions",
|
| 63 |
+
"text": "Previous work [9 ###reference_b9###] on the challenges of Winoground points out that accuracies based on cosine similarity comparison might be too harsh for the models, and it is possible that they under-perform on Winoground because the image-text pairs are out-of-distribution for them.\nThis is also a concern for our dataset, so we follow them by calculating the Recall at on the task of video-to-text retrieval on the entire VITATECS dataset for each aspect.\nSince a video may have multiple caption-counterfactual pairs in our dataset, we choose and show the recalls for captions, counterfactuals, and both descriptions in Tab. 7 ###reference_###.\nWe observe that for both ALPRO and CLIP4Clip, the recalls of captions and counterfactuals are very close.\nThis indicates that the models are able to connect the texts with their corresponding videos through the shared static information, but cannot distinguish between the different temporal information in the caption and the counterfactual.\nTo verify the design of our counterfactual descriptions, we randomly sample 100 instances from each aspect of VITATECS and apply different modification strategies to the original captions.\nSpecifically, we randomly choose 1-3 words in the caption and replace them with its synonym or a random word of the same part of speech. We also experiment with different types of words (nouns, verbs, or adjectives) as the target for replacement.\nThe results are shown in Tab. 8 ###reference_###.\nOn the one hand, we can conclude that discriminating between the original caption and these altered ones is much easier when we randomly replace the words in the caption, even when only one word is changed.\nThis margin is greater when we modify the nouns than when we modify the verbs in the captions, which aligns with our observation that current models rely heavily on static clues to make predictions.\nThis demonstrates that the temporal understanding addressed by our VITATECS is more difficult to solve than simple object or action replacement.\nAlso, the accuracy of the model rises quickly as we increase the number of replaced words, while our VITATECS maintains its difficulty despite showing greater lingual diversity.\nOn the other hand, replacing words with their synonyms without contextual information may change their semantics significantly, as evidenced by the relatively high accuracy of models on these counterfactuals compared to VITATECS.\nThis cautions us against the use of purely lexical methods for counterfactual construction.\nFinally, neither of these replacement methods is able to attach fine-grained labels to the resulting sentence, demonstrating the superiority of our counterfactual design."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "5",
|
| 67 |
+
"parent_section_id": null,
|
| 68 |
+
"section_name": "Conclusion",
|
| 69 |
+
"text": "This work aims to address the deficiency of temporal understanding evaluation abilities in existing video-language datasets.\nWe present a fine-grained characterization of temporal concepts in video descriptions, and introduce a novel dataset that measures the temporal understanding capabilities of VidLMs by their ability to distinguish between the actual description of a video and its temporally modified alternative.\nTo facilitate dataset construction, we design a human-in-the-loop annotation framework by leveraging LLMs for counterfactual description generation.\nEvaluation of state-of-the-art models demonstrates their failure to fully grasp temporal concepts.\nWe hope our work can provide valuable insight into the future development of video-language understanding research."
|
| 70 |
+
}
|
| 71 |
+
],
|
| 72 |
+
"appendix": [],
|
| 73 |
+
"tables": {
|
| 74 |
+
"1": {
|
| 75 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S2.T1.2.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S2.T1.3.2\" style=\"font-size:90%;\">Comparison with other diagnostic video datasets from four aspects: whether they are video-language datasets, whether they are open-domain, whether they target temporal understanding ability, and whether they contain a fine-grained evaluation of model abilities. </span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.4.1.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.4.1.1.2\">video-language</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.4.1.1.3\">open-domain</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.4.1.1.4\">temporal</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S2.T1.4.1.1.5\">fine-grained</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.4.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.4.2.1.1\">Temporal Dataset\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib48\" title=\"\">48</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.2.1.2\"><span class=\"ltx_text\" id=\"S2.T1.4.2.1.2.1\" style=\"color:#CC0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.2.1.3\"><span class=\"ltx_text\" id=\"S2.T1.4.2.1.3.1\" style=\"color:#336600;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.4.2.1.4\"><span class=\"ltx_text\" id=\"S2.T1.4.2.1.4.1\" style=\"color:#336600;\">\u2713</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S2.T1.4.2.1.5\"><span class=\"ltx_text\" id=\"S2.T1.4.2.1.5.1\" style=\"color:#CC0000;\">\u2717</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.3.2.1\">CATER\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib16\" title=\"\">16</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.3.2.2\"><span class=\"ltx_text\" id=\"S2.T1.4.3.2.2.1\" style=\"color:#CC0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.3.2.3\"><span class=\"ltx_text\" id=\"S2.T1.4.3.2.3.1\" style=\"color:#CC0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.3.2.4\"><span class=\"ltx_text\" id=\"S2.T1.4.3.2.4.1\" style=\"color:#336600;\">\u2713</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.3.2.5\"><span class=\"ltx_text\" id=\"S2.T1.4.3.2.5.1\" style=\"color:#CC0000;\">\u2717</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.3.1\">CLEVRER\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib63\" title=\"\">63</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S2.T1.4.4.3.2\"><span class=\"ltx_text\" id=\"S2.T1.4.4.3.2.1\" style=\"color:#336600;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.4.3.3\"><span class=\"ltx_text\" id=\"S2.T1.4.4.3.3.1\" style=\"color:#CC0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.4.3.4\"><span class=\"ltx_text\" id=\"S2.T1.4.4.3.4.1\" style=\"color:#336600;\">\u2713</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.4.3.5\"><span class=\"ltx_text\" id=\"S2.T1.4.4.3.5.1\" style=\"color:#336600;\">\u2713</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.5.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.5.4.1\">SSv2-label\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib24\" title=\"\">24</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.5.4.2\"><span class=\"ltx_text\" id=\"S2.T1.4.5.4.2.1\" style=\"color:#336600;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.5.4.3\"><span class=\"ltx_text\" id=\"S2.T1.4.5.4.3.1\" style=\"color:#CC0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.5.4.4\"><span class=\"ltx_text\" id=\"S2.T1.4.5.4.4.1\" style=\"color:#336600;\">\u2713</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.5.4.5\"><span class=\"ltx_text\" id=\"S2.T1.4.5.4.5.1\" style=\"color:#CC0000;\">\u2717</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.6.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.6.5.1\">Contrast set\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib40\" title=\"\">40</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.6.5.2\"><span class=\"ltx_text\" id=\"S2.T1.4.6.5.2.1\" style=\"color:#336600;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.6.5.3\"><span class=\"ltx_text\" id=\"S2.T1.4.6.5.3.1\" style=\"color:#336600;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.4.6.5.4\"><span class=\"ltx_text\" id=\"S2.T1.4.6.5.4.1\" style=\"color:#CC0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S2.T1.4.6.5.5\"><span class=\"ltx_text\" id=\"S2.T1.4.6.5.5.1\" style=\"color:#CC0000;\">\u2717</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.7.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S2.T1.4.7.6.1\">VITATECS (Ours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.4.7.6.2\"><span class=\"ltx_text\" id=\"S2.T1.4.7.6.2.1\" style=\"color:#336600;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.4.7.6.3\"><span class=\"ltx_text\" id=\"S2.T1.4.7.6.3.1\" style=\"color:#336600;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.4.7.6.4\"><span class=\"ltx_text\" id=\"S2.T1.4.7.6.4.1\" style=\"color:#336600;\">\u2713</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S2.T1.4.7.6.5\"><span class=\"ltx_text\" id=\"S2.T1.4.7.6.5.1\" style=\"color:#336600;\">\u2713</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 1: Comparison with other diagnostic video datasets from four aspects: whether they are video-language datasets, whether they are open-domain, whether they target temporal understanding ability, and whether they contain a fine-grained evaluation of model abilities. "
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T2.2.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S3.T2.3.2\" style=\"font-size:90%;\">Statistics of our dataset including the number of samples, the number of videos, and the average length of the original captions and the counterfactual descriptions. </span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T2.4\" style=\"width:433.6pt;height:85.6pt;vertical-align:-0.9pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-13.5pt,2.6pt) scale(0.94128625261367,0.94128625261367) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.4.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.4.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T2.4.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T2.4.1.1.1.2\">Direction</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.1.1.1.3\">Intensity</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.1.1.1.4\">Sequence</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.1.1.1.5\">Localization</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.1.1.1.6\">Compositionality</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S3.T2.4.1.1.1.7\">Type</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.4.1.2.2.1\"># samples</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.4.1.2.2.2\">3,800</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.4.1.2.2.3\">779</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.4.1.2.2.4\">151</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.4.1.2.2.5\">1,053</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.4.1.2.2.6\">1,450</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S3.T2.4.1.2.2.7\">6,605</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.4.1.3.3.1\"># videos</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S3.T2.4.1.3.3.2\">2,646</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.4.1.3.3.3\">692</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.4.1.3.3.4\">150</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.4.1.3.3.5\">915</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.4.1.3.3.6\">1,110</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S3.T2.4.1.3.3.7\">4,287</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.4.1.4.4.1\">Avg. 
len (caption)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.4.1.4.4.2\">13.6</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.4.1.4.4.3\">13.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.4.1.4.4.4\">14.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.4.1.4.4.5\">14.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.4.1.4.4.6\">13.9</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S3.T2.4.1.4.4.7\">11.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S3.T2.4.1.5.5.1\">Avg. len (counterfactual)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb\" id=\"S3.T2.4.1.5.5.2\">13.8</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.1.5.5.3\">13.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.1.5.5.4\">14.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.1.5.5.5\">14.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.1.5.5.6\">13.9</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S3.T2.4.1.5.5.7\">11.6</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
"capture": "Table 2: Statistics of our dataset including the number of samples, the number of videos, and the average length of the original captions and the counterfactual descriptions. "
},
"3": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T3.3.1.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S4.T3.4.2\" style=\"font-size:90%;\">Accuracy (%) of human annotators and state-of-the-art VidLMs on VITATECS. The VidLMs are evaluated on the full dataset, while human performance is marked in <span class=\"ltx_text\" id=\"S4.T3.4.2.1\" style=\"color:#808080;\">gray</span> to indicate it is evaluated only on a randomly sampled subset.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T3.5\" style=\"width:433.6pt;height:133.3pt;vertical-align:-0.7pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-106.8pt,32.7pt) scale(0.669915633127313,0.669915633127313) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.5.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.5.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T3.5.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.5.1.1.1.2\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.5.1.1.1.3\">Direction</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.5.1.1.1.4\">Intensity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.5.1.1.1.5\">Sequence</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.5.1.1.1.6\">Localization</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.5.1.1.1.7\">Compositionality</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.5.1.1.1.8\">Type</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.5.1.1.1.9\">Avg.</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.5.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.5.1.2.1.1\">BLIP-large\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib27\" title=\"\">27</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.1.2.1.2\">Zero-shot</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.1.2.1.3\">58.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.1.2.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.1.2.1.4.1\">67.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.1.2.1.5\">51.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.1.2.1.6\">66.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.1.2.1.7\">61.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.1.2.1.8\">78.6</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T3.5.1.2.1.9\">64.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.5.1.3.2.1\">Singularity\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib24\" title=\"\">24</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.3.2.2\">MSR-VTT</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.3.2.3\">54.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.3.2.4\">61.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.3.2.5\">52.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.3.2.6\">63.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.3.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.1.3.2.7.1\">65.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.3.2.8\">77.4</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.5.1.3.2.9\">62.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.5.1.4.3.1\">ALPRO\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib26\" title=\"\">26</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.4.3.2\">MSR-VTT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.4.3.3\">55.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.4.3.4\">56.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.4.3.5\">45.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.4.3.6\">59.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.4.3.7\">58.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.4.3.8\">74.5</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.5.1.4.3.9\">58.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.5.1.5.4.1\">VIOLET\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib12\" title=\"\">12</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.5.4.2\">MSR-VTT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.5.4.3\">60.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.5.4.4\">62.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.5.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.1.5.4.5.1\">61.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.5.4.6\">60.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.5.4.7\">64.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.5.4.8\">78.2</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.5.1.5.4.9\">64.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.5.1.6.5.1\">CLIP4Clip\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib35\" title=\"\">35</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.6.5.2\">MSR-VTT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.6.5.3\">62.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.6.5.4\">65.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.6.5.5\">51.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.6.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.1.6.5.6.1\">66.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.6.5.7\">63.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.6.5.8\">82.4</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.5.1.6.5.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.1.6.5.9.1\">65.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.1.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th 
ltx_th_row\" id=\"S4.T3.5.1.7.6.1\">X-Pool\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib17\" title=\"\">17</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.7.6.2\">MSR-VTT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.7.6.3\">59.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.7.6.4\">63.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.7.6.5\">55.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.7.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.1.7.6.6.1\">66.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.7.6.7\">64.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.7.6.8\">81.3</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.5.1.7.6.9\">65.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.1.8.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.5.1.8.7.1\">X-CLIP\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib36\" title=\"\">36</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.8.7.2\">MSR-VTT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.8.7.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.1.8.7.3.1\">63.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.8.7.4\">60.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.8.7.5\">55.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.8.7.6\">64.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.8.7.7\">63.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.8.7.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.1.8.7.8.1\">83.2</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.5.1.8.7.9\">65.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.1.9.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.5.1.9.8.1\">Video-LLaMA\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib66\" title=\"\">66</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.9.8.2\">Zero-shot</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.9.8.3\">51.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.9.8.4\">52.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.9.8.5\">56.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.9.8.6\">51.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.9.8.7\">49.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.9.8.8\">51.7</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.5.1.9.8.9\">52.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.1.10.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.5.1.10.9.1\">VideoChat\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib28\" title=\"\">28</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.10.9.2\">Zero-shot</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.10.9.3\">52.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.10.9.4\">50.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.10.9.5\">46.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.10.9.6\">50.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.5.1.10.9.7\">51.7</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T3.5.1.10.9.8\">51.0</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.5.1.10.9.9\">50.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.1.11.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S4.T3.5.1.11.10.1\"><span class=\"ltx_text\" id=\"S4.T3.5.1.11.10.1.1\" style=\"color:#808080;\">Human</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.5.1.11.10.2\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.5.1.11.10.3\"><span class=\"ltx_text\" id=\"S4.T3.5.1.11.10.3.1\" style=\"color:#808080;\">94.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.5.1.11.10.4\"><span class=\"ltx_text\" id=\"S4.T3.5.1.11.10.4.1\" style=\"color:#808080;\">93.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.5.1.11.10.5\"><span class=\"ltx_text\" id=\"S4.T3.5.1.11.10.5.1\" style=\"color:#808080;\">94.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.5.1.11.10.6\"><span class=\"ltx_text\" id=\"S4.T3.5.1.11.10.6.1\" style=\"color:#808080;\">93.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.5.1.11.10.7\"><span class=\"ltx_text\" id=\"S4.T3.5.1.11.10.7.1\" style=\"color:#808080;\">97.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.5.1.11.10.8\"><span class=\"ltx_text\" id=\"S4.T3.5.1.11.10.8.1\" style=\"color:#808080;\">92.2</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.5.1.11.10.9\"><span class=\"ltx_text\" id=\"S4.T3.5.1.11.10.9.1\" style=\"color:#808080;\">94.3</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
"capture": "Table 3: Accuracy (%) of human annotators and state-of-the-art VidLMs on VITATECS. The VidLMs are evaluated on the full dataset, while human performance is marked in gray to indicate it is evaluated only on a randomly sampled subset."
},
"4": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T4.2.1.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S4.T4.3.2\" style=\"font-size:90%;\">Accuracy (%) of CLIP-based models with and without temporal aggregation modules</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T4.4\" style=\"width:433.6pt;height:90.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-86.5pt,18.0pt) scale(0.714826404867951,0.714826404867951) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.4.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T4.4.1.1.1.1\">Model</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T4.4.1.1.1.2\">Temporal</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T4.4.1.1.1.3\">Direction</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T4.4.1.1.1.4\">Intensity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T4.4.1.1.1.5\">Sequence</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T4.4.1.1.1.6\">Localization</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T4.4.1.1.1.7\">Compositionality</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T4.4.1.1.1.8\">Type</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T4.4.1.1.1.9\">Avg.</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.4.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T4.4.1.2.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.4.1.2.1.1.1\">CLIP4Clip\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib35\" title=\"\">35</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.2.1.2\"><span class=\"ltx_text\" id=\"S4.T4.4.1.2.1.2.1\" style=\"color:#336600;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.2.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.2.1.3.1\">62.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.2.1.4\">65.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.2.1.5\">51.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.2.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.2.1.6.1\">66.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.2.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.2.1.7.1\">63.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.2.1.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.2.1.8.1\">82.4</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.2.1.9\">65.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.1.3.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.1.3.2.1\"><span class=\"ltx_text\" id=\"S4.T4.4.1.3.2.1.1\" style=\"color:#CC0000;\">\u2717</span></td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.1.3.2.2\">61.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.1.3.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.3.2.3.1\">67.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.1.3.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.3.2.4.1\">60.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.1.3.2.5\">66.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.1.3.2.6\">62.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.1.3.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.3.2.7.1\">82.4</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T4.4.1.3.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.3.2.8.1\">66.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T4.4.1.4.3.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.4.1.4.3.1.1\">X-CLIP\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib36\" title=\"\">36</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.4.3.2\"><span class=\"ltx_text\" id=\"S4.T4.4.1.4.3.2.1\" style=\"color:#336600;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.4.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.4.3.3.1\">63.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.4.3.4\">60.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.4.3.5\">55.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.4.3.6\">64.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.4.3.7\">63.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.4.3.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.4.3.8.1\">83.2</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.4.3.9\">65.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.1.5.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.1.5.4.1\"><span class=\"ltx_text\" id=\"S4.T4.4.1.5.4.1.1\" style=\"color:#CC0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.1.5.4.2\">62.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.1.5.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.5.4.3.1\">63.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.1.5.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.5.4.4.1\">59.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.1.5.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.5.4.5.1\">65.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.1.5.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.5.4.6.1\">64.2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.1.5.4.7\">82.6</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T4.4.1.5.4.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.5.4.8.1\">66.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S4.T4.4.1.6.5.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.4.1.6.5.1.1\">X-Pool\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib17\" 
title=\"\">17</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.6.5.2\"><span class=\"ltx_text\" id=\"S4.T4.4.1.6.5.2.1\" style=\"color:#336600;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.6.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.6.5.3.1\">60.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.6.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.6.5.4.1\">65.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.6.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.6.5.5.1\">58.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.6.5.6\">65.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.6.5.7\">62.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.6.5.8\">79.9</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T4.4.1.6.5.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.6.5.9.1\">65.2</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.1.7.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.4.1.7.6.1\"><span class=\"ltx_text\" id=\"S4.T4.4.1.7.6.1.1\" style=\"color:#CC0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.4.1.7.6.2\">59.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.4.1.7.6.3\">63.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.4.1.7.6.4\">55.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.4.1.7.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.7.6.5.1\">66.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.4.1.7.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.7.6.6.1\">64.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.4.1.7.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.7.6.7.1\">81.3</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S4.T4.4.1.7.6.8\">65.1</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
"capture": "Table 4: Accuracy (%) of CLIP-based models with and without temporal aggregation modules"
},
"5": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T5.2.1.1\" style=\"font-size:90%;\">Table 5</span>: </span><span class=\"ltx_text\" id=\"S4.T5.3.2\" style=\"font-size:90%;\">Average cosine similarity between the representations of original captions and counterfactual descriptions produced by different text encoders</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T5.4\" style=\"width:433.6pt;height:65.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-82.2pt,12.4pt) scale(0.725187160891341,0.725187160891341) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T5.4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T5.4.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T5.4.1.1.1.1\">Text Encoder</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T5.4.1.1.1.2\">Direction</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T5.4.1.1.1.3\">Intensity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T5.4.1.1.1.4\">Sequence</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T5.4.1.1.1.5\">Localization</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T5.4.1.1.1.6\">Compositionality</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T5.4.1.1.1.7\">Type</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T5.4.1.1.1.8\">Avg.</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T5.4.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T5.4.1.2.1.1\">CLIP-text\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib42\" title=\"\">42</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.4.1.2.1.2\">0.963</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.4.1.2.1.3\">0.964</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.4.1.2.1.4\">0.975</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.4.1.2.1.5\">0.965</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.4.1.2.1.6\">0.970</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.4.1.2.1.7\">0.912</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T5.4.1.2.1.8\">0.958</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T5.4.1.3.2.1\">Sentence-BERT\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib43\" title=\"\">43</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.1.3.2.2\">0.890</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.1.3.2.3\">0.940</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.1.3.2.4\">0.970</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.1.3.2.5\">0.916</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.1.3.2.6\">0.939</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.1.3.2.7\">0.704</td>\n<td class=\"ltx_td 
ltx_nopad_r ltx_align_center\" id=\"S4.T5.4.1.3.2.8\">0.893</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T5.4.1.4.3.1\">CLIP4Clip\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib35\" title=\"\">35</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.1.4.3.2\">0.941</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.1.4.3.3\">0.939</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.1.4.3.4\">0.969</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.1.4.3.5\">0.932</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.1.4.3.6\">0.947</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.1.4.3.7\">0.828</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T5.4.1.4.3.8\">0.926</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T5.4.1.5.4.1\">CLIP4Clip-temporal\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib35\" title=\"\">35</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.4.1.5.4.2\">0.946</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.4.1.5.4.3\">0.946</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.4.1.5.4.4\">0.971</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.4.1.5.4.5\">0.939</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.4.1.5.4.6\">0.953</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T5.4.1.5.4.7\">0.847</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S4.T5.4.1.5.4.8\">0.934</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
"capture": "Table 5: Average cosine similarity between the representations of original captions and counterfactual descriptions produced by different text encoders"
},
"6": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T6\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T6.2.1.1\" style=\"font-size:90%;\">Table 6</span>: </span><span class=\"ltx_text\" id=\"S4.T6.3.2\" style=\"font-size:90%;\">Accuracy (%) of VIOLET, Singularity, and X-Pool fine-tuned on different video-text datasets</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T6.4\" style=\"width:433.6pt;height:89.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-175.1pt,36.2pt) scale(0.553159043629239,0.553159043629239) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T6.4.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T6.4.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T6.4.1.1.1.1\">Model</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.4.1.1.1.2\">Dataset</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.4.1.1.1.3\">Direction</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.4.1.1.1.4\">Intensity</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.4.1.1.1.5\">Sequence</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.4.1.1.1.6\">Localization</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.4.1.1.1.7\">Compositionality</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T6.4.1.1.1.8\">Type</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_tt\" id=\"S4.T6.4.1.1.1.9\">Avg.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.4.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T6.4.1.2.2.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T6.4.1.2.2.1.1\">VIOLET\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib12\" title=\"\">12</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.2.2.2\">DiDeMo\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib19\" title=\"\">19</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.2.2.3\">50.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.2.2.4\">59.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.2.2.5\">55.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.2.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.2.2.6.1\">61.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.2.2.7\">64.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.2.2.8\">77.7</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.2.2.9\">61.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.4.1.3.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.3.3.1\">LSMDC\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib46\" title=\"\">46</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.3.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.3.3.2.1\">60.2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.3.3.3\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S4.T6.4.1.3.3.3.1\">62.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.3.3.4\">61.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.3.3.5\">60.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.3.3.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.3.3.6.1\">64.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.3.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.3.3.7.1\">78.2</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T6.4.1.3.3.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.3.3.8.1\">64.7</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.4.1.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.4.4.1\">YouCook2\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib68\" title=\"\">68</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.4.4.2\">58.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.4.4.3\">60.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.4.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.4.4.4.1\">62.9</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.4.4.5\">61.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.4.4.6\">61.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.4.4.7\">76.8</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T6.4.1.4.4.8\">63.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.4.1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T6.4.1.5.5.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T6.4.1.5.5.1.1\">Singularity\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib24\" title=\"\">24</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.5.5.2\">ActivityNet\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib23\" title=\"\">23</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.5.5.3\">54.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.5.5.4\">64.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.5.5.5\">50.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.5.5.6\">64.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.5.5.7\">61.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.5.5.8\">76.0</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.5.5.9\">61.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.4.1.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.6.6.1\">DiDeMo\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib19\" title=\"\">19</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.6.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.6.6.2.1\">57.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.6.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.6.6.3.1\">65.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.6.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.6.6.4.1\">53.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.6.6.5\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S4.T6.4.1.6.6.5.1\">67.2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.6.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.6.6.6.1\">64.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.6.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.6.6.7.1\">76.9</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T6.4.1.6.6.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.6.6.8.1\">64.1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.4.1.7.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.7.7.1\">SSv2-label\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib18\" title=\"\">18</a>, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib24\" title=\"\">24</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.7.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.7.7.2.1\">57.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.7.7.3\">65.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.7.7.4\">49.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.7.7.5\">63.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.7.7.6\">59.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.4.1.7.7.7\">75.2</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T6.4.1.7.7.8\">61.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.4.1.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S4.T6.4.1.8.8.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T6.4.1.8.8.1.1\">X-Pool\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib17\" title=\"\">17</a>]</cite></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.8.8.2\">LSMDC\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib46\" title=\"\">46</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.8.8.3\">60.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.8.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.8.8.4.1\">69.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.8.8.5\">50.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.8.8.6\">66.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.8.8.7\">59.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.8.8.8\">77.1</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T6.4.1.8.8.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.8.8.9.1\">63.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.4.1.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.4.1.9.9.1\">MSVD\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.17404v2#bib.bib6\" title=\"\">6</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.4.1.9.9.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.9.9.2.1\">64.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.4.1.9.9.3\">57.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.4.1.9.9.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.9.9.4.1\">51.0</span></td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_bb\" id=\"S4.T6.4.1.9.9.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.9.9.5.1\">68.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.4.1.9.9.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.9.9.6.1\">62.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.4.1.9.9.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.9.9.7.1\">78.8</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S4.T6.4.1.9.9.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.4.1.9.9.8.1\">63.8</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
"capture": "Table 6: Accuracy (%) of VIOLET, Singularity, and X-Pool fine-tuned on different video-text datasets"
},
"7": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T7\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T7.2.1.1\" style=\"font-size:90%;\">Table 7</span>: </span><span class=\"ltx_text\" id=\"S4.T7.3.2\" style=\"font-size:90%;\">Recall@10 of ALPRO and CLIP4Clip on video-to-text retrieval on VITATECS</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T7.4\" style=\"width:433.6pt;height:96.2pt;vertical-align:-0.8pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-69.5pt,15.3pt) scale(0.757376827841231,0.757376827841231) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T7.4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T7.4.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T7.4.1.1.1.1\">Model</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T7.4.1.1.1.2\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T7.4.1.1.1.3\">Description</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T7.4.1.1.1.4\">Direction</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T7.4.1.1.1.5\">Intensity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T7.4.1.1.1.6\">Sequence</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T7.4.1.1.1.7\">Localization</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T7.4.1.1.1.8\">Compositionality</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T7.4.1.1.1.9\">Type</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T7.4.1.1.1.10\">Avg.</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T7.4.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T7.4.1.2.1.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T7.4.1.2.1.1.1\">ALPRO</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.2.1.2\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T7.4.1.2.1.2.1\">Zero-shot</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.2.1.3\">Caption</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.2.1.4\">28.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.2.1.5\">48.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.2.1.6\">70.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.2.1.7\">43.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.2.1.8\">34.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.2.1.9\">22.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.2.1.10\">41.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.4.1.3.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.3.2.1\">Counterfactual</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.3.2.2\">28.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.3.2.3\">42.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.3.2.4\">70.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.3.2.5\">38.4</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T7.4.1.3.2.6\">32.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.3.2.7\">12.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.3.2.8\">37.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.4.1.4.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.4.3.1\">All</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.4.3.2\">28.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.4.3.3\">45.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.4.3.4\">70.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.4.3.5\">41.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.4.3.6\">33.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.4.3.7\">17.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.4.3.8\">39.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.4.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S4.T7.4.1.5.4.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T7.4.1.5.4.1.1\">CLIP4Clip</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T7.4.1.5.4.2\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T7.4.1.5.4.2.1\">MSR-VTT</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.5.4.3\">Caption</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.5.4.4\">53.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.5.4.5\">73.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.5.4.6\">90.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.5.4.7\">69.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.5.4.8\">65.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.5.4.9\">48.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.4.1.5.4.10\">66.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.4.1.6.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.6.5.1\">Counterfactual</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.6.5.2\">47.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.6.5.3\">66.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.6.5.4\">88.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.6.5.5\">60.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.6.5.6\">60.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.6.5.7\">22.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.4.1.6.5.8\">57.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.4.1.7.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.4.1.7.6.1\">All</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.4.1.7.6.2\">50.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.4.1.7.6.3\">69.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.4.1.7.6.4\">89.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.4.1.7.6.5\">64.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.4.1.7.6.6\">62.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.4.1.7.6.7\">35.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.4.1.7.6.8\">62.2</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
"capture": "Table 7: Recall@10 of ALPRO and CLIP4Clip on video-to-text retrieval on VITATECS"
},
"8": {
"table_html": "<figure class=\"ltx_table\" id=\"S4.T8\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T8.2.1.1\" style=\"font-size:90%;\">Table 8</span>: </span><span class=\"ltx_text\" id=\"S4.T8.3.2\" style=\"font-size:90%;\">Accuracy (%) of X-CLIP on VITATECS and other counterfactual construction strategies</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T8.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T8.4.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T8.4.1.1.1\">POS of replaced words</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S4.T8.4.1.1.2\">All</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T8.4.1.1.3\">Noun</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T8.4.1.1.4\">Verb</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T8.4.1.1.5\">Adjective</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.4.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row\" id=\"S4.T8.4.2.2.1\"># replaced words</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T8.4.2.2.2\">1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T8.4.2.2.3\">2</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T8.4.2.2.4\">3</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T8.4.2.2.5\">1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T8.4.2.2.6\">1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T8.4.2.2.7\">1</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T8.4.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T8.4.3.1.1\">Random</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.4.3.1.2\">74.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.4.3.1.3\">82.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.4.3.1.4\">90.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.4.3.1.5\">82.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.4.3.1.6\">67.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.4.3.1.7\">71.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.4.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T8.4.4.2.1\">Synonym</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.4.4.2.2\">64.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.4.4.2.3\">77.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.4.4.2.4\">83.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.4.4.2.5\">72.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.4.4.2.6\">64.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.4.4.2.7\">67.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.4.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S4.T8.4.5.3.1\">VITATECS (subset)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" colspan=\"6\" id=\"S4.T8.4.5.3.2\">64.3</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 8: Accuracy (%) of X-CLIP on VITATECS and other counterfactual construction strategies"
}
},
"image_paths": {
"1": {
"figure_path": "2311.17404v2_figure_1.png",
"caption": "Figure 1: Illustration of the gap between current training and evaluation procedures and real-world applications. In current video-language datasets (blue box), temporal information is highly correlated with static scenes. Models trained and evaluated on them cannot acquire the ability to understand temporal concepts, leading to failure in challenging real-world applications (red box).",
"url": "http://arxiv.org/html/2311.17404v2/extracted/5870154/main/figures/gap3.png"
},
"2": {
"figure_path": "2311.17404v2_figure_2.png",
"caption": "Figure 2: Examples from the six aspects of our dataset. Each sample contains a video, a ground-truth caption, and a counterfactual description with modifications in the given temporal aspect. Differences between the sentence pairs are highlighted in blue and red.",
"url": "http://arxiv.org/html/2311.17404v2/extracted/5870154/main/figures/example.png"
},
"3": {
"figure_path": "2311.17404v2_figure_3.png",
"caption": "Figure 3: Illustration of our human-in-the-loop annotation framework. Texts in orange indicate the labels predicted by the filter model. Texts in red, blue and purple indicate candidates that are eliminated by the NLI model, the filter model and human annotators, respectively.",
"url": "http://arxiv.org/html/2311.17404v2/extracted/5870154/main/figures/flowchart.png"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/2311.17404v2"
}
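
The record above shows the schema shared by the JSON files added in this commit: numbered table entries that pair raw LaTeXML HTML ("table_html") with a plain-text caption ("capture"), an "image_paths" map of extracted figure files with their captions and source URLs, and top-level "validation", "references", and "url" fields. A minimal sketch of reading one such record follows; the top-level key wrapping the table entries (written as "tables" here) is not visible in this hunk and is only an assumption, as is the local file path.

import json

# Minimal sketch (not part of the commit): load one of the per-paper records.
# The path and the "tables" key are assumptions; only "image_paths",
# "validation", "references", and "url" are visible in this hunk.
path = "20240921/2311.17404v2.json"
with open(path, encoding="utf-8") as f:
    record = json.load(f)

# Table entries keep the raw HTML plus a plain-text caption ("capture").
for idx, table in record.get("tables", {}).items():
    print(f"Table {idx}: {table['capture'][:60]}")

# Figure entries store the extracted image path, caption, and source URL.
for idx, figure in record.get("image_paths", {}).items():
    print(f"Figure {idx}: {figure['figure_path']} ({figure['url']})")

print("validated:", record.get("validation"), "source:", record.get("url"))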
|
20240921/2401.08326v3.json
ADDED
The diff for this file is too large to render.
See raw diff
|
|
20240921/2402.04648v2.json
ADDED
The diff for this file is too large to render.
See raw diff
|
|
20240921/2402.12875v4.json
ADDED
@@ -0,0 +1,628 @@
{
"title": "Chain of Thought Empowers Transformers to Solve Inherently Serial Problems",
"abstract": "Instructing the model to generate a sequence of intermediate steps, a.k.a., a chain of thought (CoT), is a highly effective method to improve the accuracy of large language models (LLMs) on arithmetics and symbolic reasoning tasks. However, the mechanism behind CoT remains unclear.\nThis work provides a theoretical understanding of the power of CoT for decoder-only transformers through the lens of expressiveness. Conceptually, CoT empowers the model with the ability to perform inherently serial computation, which is otherwise lacking in transformers, especially when depth is low. Given input length , previous works have shown that constant-depth transformers with finite precision embedding size can only solve problems in without CoT. We first show an even tighter expressiveness upper bound for constant-depth transformers with constant-bit precision, which can only solve problems in , a proper subset of . However, with steps of CoT, constant-depth transformers using constant-bit precision and embedding size can solve any problem solvable by boolean circuits of size . Empirically, enabling CoT dramatically improves the accuracy for tasks that are hard for parallel computation, including the composition of permutation groups, iterated squaring, and circuit value problems, especially for low-depth transformers.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "Large Language Models (LLMs) exhibit exceptional capabilities in complex reasoning tasks such as mathematical problem-solving and code generation (Chowdhery et al., 2023 ###reference_b7###; Anil et al., 2023 ###reference_b2###; Achiam et al., 2023 ###reference_b1###; Romera-Paredes et al., 2023 ###reference_b41###; Trinh et al., 2024 ###reference_b44###), far surpassing standard supervised machine learning techniques. The key to unlocking these advanced reasoning abilities lies in enabling LLMs to generate intermediate steps, or a chain of thought (CoT), before finalizing the final answer. This can be achieved through various methods, including training or instruction tuning a model with examples enriched with intermediate steps (Ling et al., 2017 ###reference_b23###; Cobbe et al., 2021 ###reference_b10###; Nye et al., 2021 ###reference_b34###; Chung et al., 2022 ###reference_b8###), or through few-shot CoT prompting (Reynolds & McDonell, 2021 ###reference_b39###; Nye et al., 2021 ###reference_b34###; Wei et al., 2022 ###reference_b48###).\nA natural explanation is that the intermediate steps provide extra information about the tasks and efficient approaches to solving, so\nthat a model can imitate. However, intriguingly, the efficacy of generating thought steps extends to zero-shot CoT prompting (Kojima et al., 2022 ###reference_b21###), where LLMs are only instructed with the prompt \u201clet\u2019s think step by step\u201d, and to even using incorrect reasoning steps in the few-shot examples (Wang et al., 2022a ###reference_b46###; Madaan & Yazdanbakhsh, 2022 ###reference_b27###). These observations suggest that\nthe form of CoT prompting is as important as (if not more important than) its content, because merely instructing LLMs to generate the intermediate steps helps.\nThis paper aims to study why the form of CoT improves the reasoning capability of LLMs. Our hypothesis is that CoT allows for performing more serial computations that a vanilla transformer cannot do without CoT. We formulate and analyze this hypothesis through the lens of expressiveness with and without CoT. We adopt the language of circuit complexity to discuss the capability of transformers. Previous works (Liu et al., 2022b ###reference_b25###; Merrill & Sabharwal, 2023b ###reference_b31###) have shown standard decoder-only transformers (that output answers directly) are efficient parallel computers and can only express functions computable in an -parallel run-time with threshold circuits, , a computational model that allows the , , and function with multiple inputs to be computed efficiently in parallel.\nWe first show a tighter upper bound (Theorem 3.1 ###reference_theorem1###) for expressiveness of constant-precision transformer \u2013 it can only express a proper subset class of , , where gates are not allowed. Our upper bound is also more realistic because it handles the rounding issue or iterative addition of floating point numbers, while most previous results essentially only work for fixed-point number addition.\nWe then show that transformers equipped with CoT\u2014allowing the transformer to auto-regressively generate a sequence of intermediate tokens before answering the questions\u2014can solve complex problems that inherently require serial computations (assuming well-known conjectures in complexity theory). 
Intuitively, without CoT, the number of serial computations conducted by the transformer is bounded by the depth (which is considered as a fixed constant for this work), whereas with intermediate steps, the number of serial computations possible is boosted to . Note that can easily increase as the sequence length increases where the depth is a fixed number that depends on the architecture.\nConcretely, we prove that a constant-precision transformer with intermediate steps and embedding dimension logarithmic in the sequence length can express any functions computable by a circuit of size in Theorem 3.3 ###reference_theorem3###. Taking to be polynomial in the sequence length, the result suggests that transformers with polynomially many intermediate steps are capable of computing all circuits in with polynomial size, , a superclass of P. Theorem 3.3 ###reference_theorem3### also implies that transformers with linearly many intermediate steps can compute all regular languages, including composition of non-solvable groups, like permutation group over five elements, , which does not belong to and is also widely conjectured to be out of . As such, polynomially many CoT steps makes transformers with bounded depth and precision strictly more powerful. We define the problem class that transformers can solve with a certain amount of CoT steps formally in Definition 3.4 ###reference_definition4### and summarize our theoretical results in Figure 1 ###reference_###. Interestingly, we also show that logarithmically many CoT steps do not allow the transformer to compute functions beyond . (Theorem 3.1 ###reference_theorem1###)\n###figure_1### To corroborate our theoretical analysis, we empirically evaluate the capability of transformers in solving four core problems: modular addition, permutation composition, iterated squaring, and circuit value problem.\nWe learn transformers to solve these tasks with a large amount of synthetic data, with and without CoT, or with additional hint but not CoT. The modular addition belongs to , meaning it can be easily solved in parallel. Liu et al. (2022a ###reference_b24###) shows it is solvable by constant-depth transformers with log-precision and, indeed empirically depth 1 is sufficient for the parity problem (Modulo 2 addition). The other three tasks are all conjectured to require inherently serial computations. As expected, the vanilla transformer either requires a huge depth to solve these tasks (because the depth is the upper bound on the number of serial computation by transformers), or cannot solve the tasks at all. On the other hand, CoT can solve these tasks as long as the depth exceeds a small threshold. These experiments demonstrate CoT can provide more serial computations to solve complex reasoning tasks."
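As a concrete illustration of the "inherently serial" tasks referred to above, consider iterated squaring, one of the four tasks listed later: each step depends on the previous one, so emitting the intermediate values corresponds naturally to chain-of-thought steps. The following minimal Python sketch is illustrative only and is not taken from the paper's experimental code.

def iterated_squaring(x, n, p):
    # Return the chain x, x^2, x^4, ..., x^(2^n) modulo p; each entry depends on
    # the previous one, so computing the last entry directly requires n serial steps.
    chain = [x % p]
    for _ in range(n):
        chain.append(chain[-1] * chain[-1] % p)  # one serial step per CoT token
    return chain

print(iterated_squaring(7, 5, 1009))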
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Notations and Preliminaries",
"text": "We use and to denote the set of natural numbers and real numbers respectively. For any , we define . We define . For vector , we use to denote the vector containing coordinates of from position to position . For matrix , we define to denote the submatrix by selecting rows from to , columns from to . We also use to denote the subset of indices from to the end, to denote the subset of indices from the beginning (1) to and to denote all indices.\nGiven two non-negative functions , we say (resp. ) iff there exists , such that for all , (resp. ). We use to denote the set of functions with at most polynomial growth rate.\nWe use to denote the value of binary number represented by binary string .\nWe use to denote the usual binary encoding of natural number using binary bits in the sense that and to denote the signed binary encoding, which is . For any , we define as for any and . We use to denote the element-wise product of two vectors. We use or to denote the concatenation of two vectors and ."
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "Decoder-only Transformers",
"text": "Given a vocabulary , a decoder-only transformer with parameter and maximal input length maps a sequence of input tokens to a probability distribution over for all , denoted by . We also define function by the token in that maximizes , that is, .\nGiven a vocabulary , a next-token generator with parameter and maximal input length is a mapping from to . The main next-token generator we are interested in this work is decoder-only transformers, where for all . We also recursively define , for every positive integer and satisfying that with the base case that . In other words, for all , the output with steps of CoT is\n.\nThe decoder-only transformer model we consider in this paper is very similar to GPT style architectures (Radford et al., 2019 ###reference_b38###) and consists of four parts: a token embedding layer (), a position encoding layer (), an output linear layer (), and a stack of identical layers serving as the \u201cdecoder\u201d where is also called the depth of the model. Each decoder layer has two sub-layers: a multi-head self-attention layer () and a position-wise fully-connected feed-forward network (). Each layer mentioned above has its own trainable parameters and is indexed by the layer name and the depth for attention and feedforward layers.\n111We ignore the LayerNorm (Ba et al., 2016 ###reference_b3###) in the usual transformer architecture for simplicity. Our expressiveness analysis can extend to the transformers with LayerNorm with more careful treatment. See Section F.1 ###reference_### for discussion.\nThat is we can split the model parameter in the following way: , which are all trainable. (See formal definition in Algorithm 2 ###reference_###). Throughout this paper, we use to denote the embedding size of a transformer.\nGiven attention parameter , we define the Attention layer with mask for decoder-only transformer in Algorithm 3 ###reference_###. Note allowing multi-head attention will not change the class of problems solvable by constant layer decoder-only transformers as we can simulate 1 multi-head attention layer with any constantly many heads with multiple single-head attention layers. Thus for simplicity of presentation, we do not include multi-head attention in the definition below.\nGiven the parameter of fully-connected feedforward network layer , we define the fully-connected feedforward layer as .\nGiven the parameter of token embedding layer , we define the token embedding layer by viewing as a mapping from to , that is, for all , the token embedding is .\nGiven the parameter of position encoding layer , we define the token embedding layer by viewing as a mapping from to that is, for all , the position embedding is as .\nGiven the parameter of output layer , we define the output layer as for all ."
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "Circuit Complexity",
"text": "In this paper we consider the following notion of problems: given a sequence of input tokens, output a token as the answer. Mathematically, given a vocabulary , we call a mapping a problem. If the correct answer is always or , we call a decision problem. In circuit complexity, such is also called a language.\nThough the standard definition of circuit complexity only deals with binary strings, given any finite vocabulary , we can always replace each token in by its binary representation, and the length of the input only blows up by a constant factor. Therefore we can extend existing complexity classes listed to arbitrary finite vocabulary naturally.\nThe class contains all problems solvable by a deterministic Turing machine in polynomial time.\nA Boolean circuit over variables is a directed acyclic graph where nodes are , , or gates. The gates with in-degree 0 are the inputs, which are assigned one of the boolean variables. Given the inputs, the circuit computes the value of each non-input gate based on the value of the incoming gates and outputs a number at the output gate.\nGiven any function , denotes the class of problems that can be solved by boolean circuits with gates when the input length is . Formally, a problem is in if and only if there exists a sequence of circuits such that each circuit has inputs and 1 output, the size of each circuit is at most , and for all strings , is in if and only if .\nWe define the class as the set of problems that can be solved by a family of polynomial-size circuits, that is, . Since any Turing Machine with time bound can be simulated by a circuit of size (Pippenger & Fischer, 1979 ###reference_b37###), we know that .\nThe class contains all problems that can be solved in a small parallel runtime\u2014polylogarithmic in input length\u2014and with a polynomial number of processors. Formally, for a positive integer , a problem is in if and only if there exists a polynomial and a family of circuits such that each circuit has inputs and 1 output, the fan-in of the gates is at most , the size of each circuit is at most , the depth of each circuit is , and for all strings , is in if and only if . Finally we define .\nThe class is defined almost the same as for each , except the and gates in allow unbounded fan-in.\nThe class allows a more powerful type of gate, , compared to . gate can have unbounded fan-in and is defined as\n.\nIt holds that for all natural number . Therefore , which all stands for the problem class that can be solved in polylogarithmic time with polynomial parallel processors."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Expressiveness Theory for Transformers with Chain of Thought(CoT)",
"text": "In this section, we study the expressiveness of transformers with CoT from a theoretical perspective."
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Finite Precision Modeling",
"text": "In practice, training and inference of transformers are typically done with 16- or 32-bit floating point numbers. Thus in this paper, we mainly focus on the computation model of constant-precision transformers, where the output of each arithmetic operation is rounded to the closest floating point number representable by a fixed number of digits following IEEE 754 standard (Definition 3.2 ###reference_definition2###), thus avoiding the unrealistic infinite precision assumption made by prior works (P\u00e9rez et al., 2019 ###reference_b35###; Dehghani et al., 2018 ###reference_b11###).\nBelow we give a formal definition of the floating-point number and rounding operation. Recall denote the value of binary number represented by for any .\nLet be the number of bits for exponents and be the number of bits for significand. A -bit binary string is a floating-point binary representation of number with -bit exponent and -precision, where the sign is , the significand is , and the exponent is . We further use to denote all the floating numbers representable using -bit exponent and -bit precision (significand), that is, . We define .\nWe also use to denote the inverse of . We note that when the number of exponent bits is larger than , there are multiple ways to represent a number in by a binary string and we assign as the string with the smallest , which is unique for all non-zero numbers. For we additionally set .\nFor any and any closed subset of containing , , we define correct rounding as the closest number to in . We break the tie by picking the one with a smaller absolute value.\nIn particular, we denote the rounding operation with -bit exponent, -bit precision by , which is also denoted by for convenience. We extend the definition of and to vector inputs by rounding coordinate-wisely.\nOur notion of floating-point number simplifies the IEEE 754 Standard for Floating-point Arithmetic (IEEE, 2008 ###reference_b19###) by removing and . When overflow happens, we always round the output to the (negative) largest representable number in . For unary functions like and binary functions including addition, subtraction, multiplication, and division, we simply define their rounded version by rounding their outputs. Whenever division by happens, we treat it as the model outputs the wrong result.\nNext, we define finite-precision summation over more two numbers by decomposing it as a chain of rounded binary addition in a fixed order. 222Technically speaking, instead of a chain, the summation could also proceed like a tree. This is a more complicated case and we leave it for future work.\nFor any and vector , we define summation with iterative rounding to bit exponent and -bit precision as , where for any and ,\nWe further define the following operations:\nFinite-precision inner product: ;\nFinite-precision matrix product: ;\nFinite-precision softmax: .\nFinally, a finite-precision transformer can be defined by replacing all the infinite-precision operations by their finite-precision counterparts listed above. (See details in Algorithm 4 ###reference_###). We postpone the details of the finite-precision version of individual transformer layers into Appendix B ###reference_###."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": ": Complexity Class for Constant-depth Transformers with CoT",
"text": "In this subsection, we define the complexity class consisting of all the problems that can be solved by some decoder-only transformers with with finite precision.\nGiven a finite vocabulary and four functions , informally, is the family of problems solvable by a transformer with a constant depth, bits of precision, bits of exponent, embedding size and steps of CoT. Formally, we say a problem is in iff there is an integer and three functions , , such that for every positive integer , there is a -layer decoder-only transformer, denoted by with embedding size , bits of precision, and bits of exponent, that can output given any input in , using steps of chain of thought. Mathematically, it means\nWe also extend the definition of to a class of function instead of a single function. For example, .\nWe define as the problems that a constant-depth, constant-precision decoder-only transformer can solve with bits of precision, bits of exponent, embedding size and without CoT (or with only step of CoT).\nBy definition, is monotone in all , e.g., if for all . In particular, we have .\nNote the above-defined complexity class is non-uniform, that is, it allows a different program for every input size. This is in contrast to previous works (P\u00e9rez et al., 2019 ###reference_b35###, 2021 ###reference_b36###; Yao et al., 2021 ###reference_b53###; Weiss et al., 2021 ###reference_b49###; Chiang et al., 2023 ###reference_b6###; Hao et al., 2022 ###reference_b17###; Merrill & Sabharwal, 2023a ###reference_b30###; Merrill et al., 2022 ###reference_b33###) which focus on the uniform transformer classes. Please refer to Appendix G ###reference_### for a discussion."
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "Tighter Upper Bounds on Transformer Expressiveness",
"text": "Existing works (Merrill & Sabharwal, 2023b ###reference_b31###; Liu et al., 2022a ###reference_b24###) have shown that constant depth, polynomial width, and log precision transformers can be simulated in a small parallel time, i.e., using circuits. These results are built on the fact that multiplication and division of -bits binary numbers (Hesse, 2001 ###reference_b18###), as well as the iterated addition over different -bit binary integers are in .\nHowever, such expressiveness upper bounds may be unrealistic for transformers operating with floating point numbers. (Merrill & Sabharwal, 2023b ###reference_b31###; Liu et al., 2022a ###reference_b24###) implicitly assumes when adding more than one floating-point number, the algorithm first computes the exact answer without rounding using arbitrarily more precision and only performs rounding in the end. However, in practice rounding happens after each addition between two numbers and it is open if such upper bounds still holds. Immediate rounding makes iterated addition over floating point numbers no longer associative (Goldberg, 1991 ###reference_b15###), for example, . The associativity of integer addition plays a crucial role in the fact that the iterated addition over different -bit binary integers is in .\nIn this section, we present two novel expressiveness upper bounds for transformers which round the immediate result after each step of the arithmetic operation. First, we show a strictly tighter upper bound than , which is , for constant-depth transformers with both constant bits of precision and exponents. (Theorem 3.1 ###reference_theorem1###) This suggests when input length is sufficiently long, constant-precision transformers cannot count eventually, even in the sense of modular. For example, it is well known that no circuits can decide the parity of a binary string.\n.\nOur second result, Theorem 3.2 ###reference_theorem2###, shows that when the number of bits for the exponent is (i.e. fixed-point numbers), upper bounds for the expressiveness of constant-depth, log-precision transformers still holds, even with the correct rounding defined in Definition 3.2 ###reference_definition2###.\n.\nWe note that the fact that a single forward pass of the transformer can be simulated by an circuit immediately implies that transformer output with steps of CoT can also be simulated by . This is because in general one can the transformer output with steps of CoT as an of disjoint subcircuits, where each of them enumerates all possible values of CoT tokens and output the value of the token in the branch where all the intermediate token values are consistent. This enumeration can be done in parallel and thus only takes constant depth. When , this only leads factor of explosion in circuit size and thus still in . The same argument holds for as well.\nThe main technical difficulties in above two results are showing has (resp. ) circuits when are both constants (resp. , ). We view iterated addition with rounding over as an automaton with both state space and vocabulary being .\nThe first result are due to a novel application of classical Krhon-Rhodes decomposition theorem for automata (Theorem C.2 ###reference_theorem2###), where we use the property of rounded addition that for all , . 
We formalize this property in Definition D.2 ###reference_definition2### as ordered automata and show all ordered automata are counter-free Theorem D.3 ###reference_theorem3### and thus can be simulated by circuits (McNaughton & Papert, 1971 ###reference_b29###).\nThe proof technique for Theorem 3.1 ###reference_theorem1### does not generalize to Theorem 3.2 ###reference_theorem2### because the depth of circuits constructed before depends on the number of the states of the automaton and thus is not constant. Our proof for Theorem 3.2 ###reference_theorem2### is motivated by Algorithm 1 in Liu et al. (2022a ###reference_b24###) for the automaton named \u2018GridWorld\u2019.\nHowever, it remains open whether constant-depth, log-precision transformers with log bits for exponents or even constant bits for exponents have circuits."
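The example elided above concerns non-associativity of rounded addition; an analogous decimal stand-in (rounding every intermediate result to three significant digits) illustrates the same point:

round3 = lambda x: float(f"{x:.3g}")          # round to 3 significant digits
left = round3(round3(100.0 + 0.4) + 0.4)      # (100 + 0.4) + 0.4 -> 100.0
right = round3(100.0 + round3(0.4 + 0.4))     # 100 + (0.4 + 0.4) -> 101.0
assert left != right                          # rounded addition is not associative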
},
{
"section_id": "3.4",
"parent_section_id": "3",
"section_name": "CoT Makes Transformers More Expressive",
"text": "Now we are ready to present our main theoretical results (Theorem 3.3 ###reference_theorem3###) which characterize the expressiveness of constant-depth, constant-precision transformers with CoT and embedding size. embedding sizes are necessary to ensure that the position embeddings for inputs are different. All the lower bounds for transformer expressiveness (with or without CoT) are proved for fixed-point numbers, i.e., without using any exponent bits. Allowing exponent bits will only make transformers more expressive. For convenience, we define . The omitted proofs in this section can be found in Appendix E ###reference_###.\n###figure_2### ###figure_3### ###figure_4### For any polynomial function , . In particular, .\nCompared to Theorems 3.1 ###reference_theorem1### and 3.2 ###reference_theorem2###, Theorem 3.3 ###reference_theorem3### shows that allowing polynomial steps of CoT strictly makes constant-depth, constant-precision, decoder-only transformer more expressive and log-precision transformers more expressive under a standard hardness assumption that .333Indeed such separation can be shown for any polynomial steps of CoT by padding polynomially many tokens to input.\nThe high-level proof idea is that we use each step in CoT to simulate one gate operation in the target circuit and write the gate output as next input. To do that, we use one position encoding to store the information for each gate, which contains four parts: the current gate id, the next gate type , and the two input gates id of the next gate. Since there are total gates, embedding size suffices to store the above information. The CoT here is constructed to be the values of each gate in the increasing order of id. Therefore, in each step, we can use attention to pull the value (either computed already or it is input) of the two input gates and use a feedforward network to compute the value of the current gate. The proof idea is illustrated in Figure 2 ###reference_###.\n\u220e\nAs we can see from the proof sketch, a crucial step for CoT to simulate any depth circuit is to write the output token back to the next input position. This action resets the \u201cdepth\u201d of the intermediate output in the circuit to . Our theory explains the ablation experiment in Wei et al. (2022 ###reference_b48###) that when the model is prompted to output a only sequence of dots (. . .) equal to the number of tokens needed to solve the problem, the performance is no better than directly outputting the answer.\nBecause every regular language can be recognized by a finite state automaton (Definition C.1 ###reference_definition1###) and finite state automata can clearly be simulated by linear size circuits. The following holds as a direct corollary of Theorem 3.3 ###reference_theorem3###\nEvery regular language belongs to .\nBelow we give a concrete regular language that constant-depth, poly-embedding-size transformers can solve only with CoT, the wording problem of permutation group over elements, in Theorem 3.5 ###reference_theorem5###, under a standard hardness assumption that (Yao, 1989 ###reference_b52###).\nGiven elements from , , we use to denote the decision problem of whether is equal to the identity of .\nFor convenience, in this paper, we extend the domain of to the sequence of groups encoded by binary strings. 
The proof of Theorem 3.5 ###reference_theorem5### is a direct consequence of Theorems 3.3 ###reference_theorem3###, 3.2 ###reference_theorem2### and 3.6 ###reference_theorem6###.\nAssuming , the wording problem of , is in but not .\n\u200b\u200bThe wording problem of is -complete under reductions. That is, for any decision problem in , there is a family of circuits (constant depth, fan-outs), such that for any and ,\nFirst is a regular language, thus belonging to by Corollary 3.4 ###reference_theorem4###. Since is -complete by Theorem 3.6 ###reference_theorem6###, assuming , does not belong to . This proof is completed by applying Theorem 3.2 ###reference_theorem2###, which says .\n\u220e\n###figure_5### So far we have been focusing on the expressiveness of transformer with embedding size,\nso it is natural to ask whether transformers can also benefit from having a larger embedding size, say ? Our Theorem 3.7 ###reference_theorem7### answers this question positively by showing that log-precision (resp. constant-precision) constant-depth poly-embedding-size decoder-only transformers with steps of CoT can simulate any -size circuit with some (resp. ) oracle gates with input.\nFormally, given a decision problem , we use to denote the restriction of on , which can also be viewed as an single gate with fan-ins. We define problems that can be solved by circuits with certain sizes of gates (including oracle gates) by Definition 3.7 ###reference_definition7###. 444Our definition of complexity class solvable by circuits with oracle is slightly different from that in literature (Wilson, 1985 ###reference_b50###), where the size of the oracle circuits refers to the number of wires, whereas ours refers to the number of gates.\nFor any decision problem and , we define as the set of decision problems such that there exists and circuits where contains at most , , , and gates.\nFor a complexity class , we define .\nFor any , it holds that .\nSpecifically, for , we have .\nFor any , it holds that .\nSpecifically, for , we have .\nTheorem 3.8 ###reference_theorem8### shows that for steps of CoT, using embedding size does not improve expressiveness over using embedding size (Theorem 3.3 ###reference_theorem3###), because . However, Theorem 3.9 ###reference_theorem9### shows that for any specific polynomial steps of CoT, increasing embedding width from to make transformers strictly more powerful.\nFor any , and for all , .\n###figure_6###"
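The construction sketched above (one gate value written per CoT step, following the topological order of the circuit) can be rendered schematically as follows. This is an illustration of the proof idea, not the transformer construction itself; the explicit gate list plays the role of the position encodings described in the proof.

def evaluate_with_cot(inputs, gates):
    # Each gate is (op, i, j) with op in {"AND", "OR", "NOT"} and i, j indices into
    # the tokens written so far; gates are listed in topological order.
    values = list(inputs)          # input tokens, followed by CoT tokens
    for op, i, j in gates:
        a, b = values[i], values[j]
        if op == "AND":
            out = a & b
        elif op == "OR":
            out = a | b
        else:                      # "NOT" ignores its second argument
            out = 1 - a
        values.append(out)         # "write the gate output as the next input token"
    return values[-1]              # value of the output gate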
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "CoT Empirically Improves Expressiveness of Low-Depth Transformers on Inherently Serial Problems",
"text": "This section is an empirical study of the expressiveness of decoder-only transformers with CoT on four different arithmetic problems: modular addition, permutation composition (), iterated squaring, and circuit value problem. The first problem is parallelizable and can be solved by constant-depth transformers with log-precision while the latter three are inherently serial under some standard hardness assumptions in computational complexity or cryptography. As a prediction of our theory, we expect to see a huge improvement in accuracy when CoT is turned on."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Related Works",
"text": "Despite the numerous empirical achievements, unanswered questions concerning the inner workings of neural networks capable of algorithmic reasoning. The ability of self-attention to create low-complexity circuits has been recognized (Edelman et al., 2022 ###reference_b12###; Hahn, 2020 ###reference_b16###; Merrill et al., 2021 ###reference_b32###), as well as its capacity to form declarative programs (Weiss et al., 2021 ###reference_b49###), and Turing machines (Dehghani et al., 2018 ###reference_b11###; Giannou et al., 2023 ###reference_b14###; P\u00e9rez et al., 2021 ###reference_b36###). Moreover, it has been demonstrated that interpretable symbolic computations can be drawn from trained models (Clark et al., 2019 ###reference_b9###; Tenney et al., 2019 ###reference_b42###; Vig, 2019 ###reference_b45###; Wang et al., 2022b ###reference_b47###).\nLiu et al. (2022a ###reference_b24###) is a closely related work to ours, which studies the expressiveness of low-depth transformers for semi-automata. Their setting corresponds to using only 1 step of CoT and our contribution is to show that allowing more steps of CoT enables the transformers to solve more difficult problems than semi-automata, especially those inherently serial problems, like the circuit value problem, which is -complete."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Conclusion",
"text": "We study the capability of CoT for decoder-only transformers through the lens of expressiveness. We adopt the language of circuit complexity and define a new complexity class which corresponds to a problem class solvable by constant-depth, constant-precision decoder-only transformers with steps of CoT, embedding size and floating-point numbers with bits of exponents and bits of significand. Our theory suggests that increasing the length of CoT can drastically make transformers more expressive. We also empirically verify our theory in four arithmetic problems. We find that for those three inherently serial problems, transformers can only express the groundtruth function by using CoT."
}
],
"appendix": [
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A Additional Experimental Results",
"text": "In this section present the experimental results for base setting which is omitted in the main paper and the details of training and each task. We use the nanogpt666https://github.com/karpathy/nanoGPT ###reference_### codebase for language modeling."
},
{
"section_id": "Appendix 2",
"parent_section_id": null,
"section_name": "Appendix B Details on Finite-Precision Layers",
"text": "In this section, we give the definition of the finite-precision version of different transformer layers. Recall that given , the numbers representable using -bit significand and -bit exponent is ."
},
{
"section_id": "Appendix 3",
"parent_section_id": null,
"section_name": "Appendix C Preliminary of Automata and Krohn-Rhodes Decomposition Theorem",
"text": "In this section we recap the basic notations and definitions for automata theory and Krohn-Rhodes Decomposition Theorem (Krohn & Rhodes, 1965 ###reference_b22###), following the notation and presentation of Maler (2010 ###reference_b28###).\nA deterministic automaton is triple where\n is a finite set of symbols called the input alphabet, is a finite set of states and\n is the transition function.\nThe transition function can be lifted naturally to input sequences, by letting for all recursively.\nAn automaton can be made an acceptor by choosing an initial state and a set of accepting states . As such it accepts/recognizes a set of sequences, also known as a language, defined as . Kleene\u2019s Theorem states that the class of languages recognizable by finite automata coincides with the regular\nlanguages.\nA surjection \nis an automaton homomorphism from to if for every .\nIn such a case we say that \nis homomorphic to and denote it by . When \nis a bijection, and are said to be isomorphic.\nThe conceptual significance of Automaton Homomorphism is that, if we can simulate any and , we can \u2018almost\u2019 simulate as well, in the sense of following lemma:\nFor any two automata satisfying that for some function , for any , , , it holds that .\nWe claim for any , it holds that . This claim holds by definition of automaton homomorphism for all . suppose the claim already holds for all no longer than for some , for any with and , it holds that . Therefore . Thus we conclude that\n.\n\u220e\nA Semigroup is a pair where\n is a set and is a binary associative operation (\u201cmultiplication\u201d) from to . A\nMonoid is a semigroup admitting an identity element such that \nfor every . A group is a monoid such that for every there exists an element\n (an inverse) such that .\nA surjective function \nis a semigroup homomorphism from to if for every .\nIn such a case we say that is homomorphic to and denote it by . Two mutually homomorphic semigroups are said to be isomorphic.\nThe transformation semigroup of an automata is the semigroup generated by .\nBelow we give the definition of the cascade product of two automata, which is a central concept used in Krohn-Rhodes Decomposition Theorem for automata.\nLet and\n be two automata. The cascade product is the automaton where\nThe cascade product of more than two automata is defined as\n.\nA automaton is a permutation-reset automaton if for every letter , is either a\npermutation or reset. 
If the only\npermutations are identities, we call it a reset automaton.\nFor every automaton A there exists a cascade such that:\nEach is a permutation-reset automaton;\nThere is a homomorphism from to ;\nAny permutation group in some \nis homomorphic to a subgroup of the transformation semigroup of .\nThe pair is called a cascaded decomposition of .\nNext we introduce a key concept used in the proof of Theorem D.1 ###reference_theorem1### (and thus Theorem 3.1 ###reference_theorem1###) \u2013 Counter-free Automaton.\nAn automaton is counter-free if no word induces a permutation other than identity on any subset of .\nA subclass of the regular languages is the class of star-free sets defined as:\nThe class of star-free regular languages over is the\nsmallest class containing and the sets of the form where , which is\nclosed under finitely many applications of concatenation and Boolean operations including union, intersection, and complementation.\nIt is well-known that languages recognized by counter-free automata have the following equivalent characterizations.\nSuppose is a regular language not containing the empty string. Then the following are equivalent:\nis star-free;\nis accepted by a counter-free automata.\nis non-counting, i.e., there is an so that for all , , and and all , .\nCounter-free property of an automaton can also be characterized via its transformation semigroup by Lemma C.4 ###reference_theorem4###, whose proof is straightforward and skipped.\nAn automaton is counter-free if and only if the transformation semigroup of the automaton is group-free, i.e., it has no non-trivial subgroups. A semigroup is group-free if and only if it is aperiodic, i.e., for all , there exists , .\nThus Theorem C.5 ###reference_theorem5### holds as a corollary of Theorem C.2 ###reference_theorem2###.\nFor every counter-free automaton there exists a cascade such that each is a reset automaton and there is a homomorphism from to .\nUsing Theorem C.5 ###reference_theorem5### the following theorem connects the counter-free automata to constant-depth poly-size circuits with unbounded fan-in. The high-level proof idea is that any reset automaton can be simulated using constantly many depth and any counter-free automaton can be decomposed into the cascade product of a finite number of reset automaton.\n[Theorem 2.6, Chandra et al. (1983 ###reference_b5###)]\nSuppose is an counter-free automaton. Then there is a circuit of size with unbounded fan-in\nand constant depth that simulates for any and satisfying , where hides constants depending on the automaton."
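The cascade product definition above lost its formulas in extraction; the standard Krohn-Rhodes form has the second automaton read both the first automaton's current state and the input letter. A minimal sketch under that assumption:

def cascade(delta1, delta2):
    # delta1 : (q1, sigma) -> q1'        transition of the first automaton
    # delta2 : (q2, (q1, sigma)) -> q2'  transition of the second, driven by (state, letter)
    def delta(state, sigma):
        q1, q2 = state
        return (delta1(q1, sigma), delta2(q2, (q1, sigma)))
    return delta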
},
{
"section_id": "Appendix 4",
"parent_section_id": null,
"section_name": "Appendix D Proofs for Expressiveness Upper Bounds (Section\u00a03.3)",
"text": "The main technical theorems we will prove in this section are Theorems D.1 ###reference_theorem1### and D.2 ###reference_theorem2###. Their proofs can be found in Sections D.1 ###reference_### and D.2 ###reference_### respectively.\nRecall is the binary representation of floating point with -bit exponent and -bit precision.\nFor any fixed , has circuits.\nIn detail, there is a family of circuits such that for all , it holds that\nFor , has circuits.\nIn detail, there is a family of circuits such that for all , it holds that\nWith Theorems D.1 ###reference_theorem1### and D.2 ###reference_theorem2### ready, Theorems 3.1 ###reference_theorem1### and 3.2 ###reference_theorem2### are standard (e.g., see proof of Theorem 4 in Liu et al. (2022a ###reference_b24###)) and thus are omitted.\nA total order on some set is a binary relationship satisfying that for all :\n(reflexive)\n, (transitive)\n, (antisymmetric)\nor . (total)\nWe say an automaton is ordered if and only if there exists a total order on and for all , preserves the order, that is,\nAll ordered automata are counter-free. Languages recognizable by any ordered automata belong to .\nTo show an ordered automaton is counter-free, it suffices to its transformation semigroup is group-free, or aperiodic. We first recall the definition of aperiodic semigroups Lemma C.4 ###reference_theorem4###. Let be the transformation induced by word . Transformation semigroup of is aperiodic iff for any , there exists , such that .\nNow We claim for any , there is , such that . Since is finite, this implies that there exists , such that and thus the transformation semigroup of is aperiodic. First, note that is ordered, we know is order-preserving for all . Let where , we have is also order-preserving and thus for all , . Then we proceed by three cases for each :\n. In this case, it suffices to take ;\n. Since is order-preserving, we know for any , . Since is finite, there must exist some such that .\n. Same as the case of .\nSince is a total order, at least one of the three cases happens. This concludes the proof.\nThe second claim follows directly from Theorem C.3 ###reference_theorem3###.\n\u220e\nFor any , iterated addition on floating point numbers with -bit exponent and -bit significand can be viewed .\nAutomaton is ordered, where for any .\nThe total order we use for as the state space of automaton coincides with the usual order on . Recall the rounding operation is defined as , which means rounding operation is order preserving, that is, for any , . Thus for any with , it holds that . Thus is ordered.\n\u220e\nThe following theorem Theorem D.1 ###reference_theorem1### is a direct consequence of Theorem D.4 ###reference_theorem4###.\nWe first claim that the following algorithm Algorithm 5 ###reference_### correctly computes over numbers in .\nAlgorithm 5 ###reference_### outputs for all and .\nNote that , , and , thus we conclude .\nWithout loss of generality, we can assume that . Therefore , and , which further implies that and . This ensures is always well-defined. For convenience we use to denote in the rest of this proof.\nNow we claim either or . By definition of , if neither of these two equalities happen, we have that , , and , which contradicts with the maximality of since . Without loss of generality, we assume and the analysis for the other case is almost the same. 
Now we claim that for all , no negative overflow happens at position , that is, .\nWe will prove this claim for two cases respectively depending on whether there exists some such that . The first case is such does not exist. Then neither positive or negative overflow happens through to , and thus\nIf such exists, we let to be the maximum of such . Then neither positive or negative overflow happens through to . Due to the optimality of , we know that for all , . Thus\nNow we claim . Because there is no negative overflow between and , we have that and the fist inequality is only strict when positive overflow happens at some . If there is no such , then and thus . Otherwise such exists and be the maximum of such . Then , where the last inequality is due to the optimality of . Thus in both cases we conclude that .\nFinally we will show there is neither negative or positive overflow from to and thus , which would justify the correctness of the algorithm. We have already shown there is no negative overflow. Suppose there is a positive overflow at some in the sense that and we let be the first positive overflow after . By definition of , there is neither positive and negative overflow between and and thus , which is contradictory to the assumption that there is a positive overflow at . This concludes the proof.\n\u220e\nIt suffices to show that Algorithm 5 ###reference_### can be implemented by a family of circuits since Lemma D.5 ###reference_theorem5### guarantees the correctness of Algorithm 5 ###reference_###. We can treat all the fixed-point floating numbers in the Algorithm 5 ###reference_### as integers with a suitable rescaling, which is . Since both sorting and adding binary integers with polynomial bits have circuits, each line in Algorithm 5 ###reference_### can be implemented by an circuits (for all indexes simultaneously if there is any).\n\u220e\n###figure_7###"
|
},
{
"section_id": "Appendix 5",
"parent_section_id": null,
"section_name": "Appendix E Proofs for Expressiveness Lower Bounds (Section\u00a03.4)",
Note this expression is valid because it can be expressed by two-layer ReLU nets with constant bits of precision and a constant number of neurons.\nThe final output at position is .\nBelow we first describe the value of the internal variables of the transformer and then show there exist parameters making such computation realizable. Let be the input tokens, we define . We claim there exists transformers parameter such that (i.e. ). More specifically, we claim that our constructions will ensure the following inductively for all ,\n;\n;\n;\n;\n;\n.\nNow we explain why the above conditions hold for any position using induction, i.e., assuming it is true for all . We first notice that by our construction, for all , it holds that and for all . Note these are the only information that will be used in the later attention layers.\nThis is simply by the construction of and .\nIn the first attention layer, at the th position, we have as the query, as the key and as the value for all . Note here we reduce the sizes of hidden embeddings for simplicity of demonstration. This is valid because we can fill the extra coordinates by 0. This is valid because we can always set the extra coordinates to be . By Lemma E.3 ###reference_theorem3###, we know that for all . Recall that the attention scores are defined as , we know that .\nWe set the parameters in the first fully-connected feedforward layer to be all and let the skip connection pass the intermediate values.\nThe second attention layer attains and places it in the third coordinate in the same way as step 2.\nIn the fully-connected feedforward layer we compute for all . We can verify that , which is the desirrd output of the gate . This is because when , the output is and when , the output is .\nThe output layer uses the fourth coordinate of , which is according to induction, as the output.\nThis completes the proof of Theorem 3.3 ###reference_theorem3###.\n\u220e\nIn this subsection, we prove Theorems 3.7 ###reference_theorem7### and 3.8 ###reference_theorem8###. We first prove a useful lemma that gives an equivalent characterization of and .\nFor any satisfying , a decision problem belongs to (resp. ) if and only if there exist a polynomial , a function , and a depth such that for every there exist a sequence of sizes-, depth- circuits, , with unlimited-fanin , and gates (with additionally gates for ) and that for all ,\nWe will prove for only and the proof for is almost the same.\nThe \u201c\u201d direction is straightforward. By definition of (Definition 3.7 ###reference_definition7###), for any , there is a function and a family of circuits such that for every and , can be computed by a size- threshold circuits with oracle gate . Now we sort all the nodes in the circuits with oracle gates along the topological order as where are the inputs and is the number of the gates, then clearly is a function of for all and this function can be implemented by a different threshold circuit of constant depth and size for each . This completes the proof of \u201c\u201d direction.\nNow we prove the other direction \u201c\u201d. We first show that given sizes-, depth- circuits, , there is a depth-, size circuit , such that\nwhere is the one-hot vector with its th coordinate being . Indeed, it suffices to set\nOnce we have such oracle gate , given input , we can recursively define\nThus we can compute using oracle gate . We can get constant gate and by using and . respectively. 
This completes the proof.\n\u220e\nNow we are ready to prove Theorems 3.7 ###reference_theorem7### and 3.8 ###reference_theorem8###. We will prove Theorem 3.7 ###reference_theorem7### first and the proof of Theorem 3.8 ###reference_theorem8### is very similar to Theorem 3.7 ###reference_theorem7###.\nWe first show that . For the case that the vocabulary of transformer , by Theorem 3.2 ###reference_theorem2###, we know for any , can be expressed by a circuit whose depth is uniformly upper bounded by some constant for all . This completes the proof when . When , we can use the binary encoding of elements in as the inputs of those gates constructed for the later layers of the transformer.\nNow we turn to the proof for the other direction: . In high-level speaking, the proof contains two steps:\nWe show that . The first step has two key constructions: (a). using attention to copy all the weights to the same position; (b). we can use polysize two-layer FC net with ReLU activation to simulate gate with unbounded fan-in (Lemma E.5 ###reference_theorem5###);\nWe can do the first step for all positions simultaneously.\nBy the equivalence construction shown in the Lemma E.4 ###reference_theorem4###, we know that for any problem , there exist constant , polynomial , and , and a sequence of threshold circuits, , each of size (number of non-input gates) and depth of , and that for all ,\nNow we present the construction of the constant-depth, constant-precision decoder-only transformer, which computes problem when input length is . Without loss of generality, we only consider the case where . We set vocabulary , embedding width , depth equal to , CoT length and precision so the precision is high enough for simulating all the size gates used in (Lemma E.5 ###reference_theorem5###). We set , , and for all , where we use to denote the one-hot vector whose th coordinate is for and to denote one-hot vector whose th coordinate is for .\nBelow we first describe the value the internal variables of the transformer and then show there exist parameters making such computation realizable. To make our claims more interpretable, we only write the non-zero part of the embedding and omit the remaining \u2019s. the Let be the input tokens and , our constructions will ensure that\n, .\n;\n;\n;\n\n\nstores the intermediate result of circuit at layer , and ;\n, for all .\nNow we explain the purpose of each layer and how to set the parameters such that the requirements above are met.\n, is the goal of the construction;\nThis is by our construction of and ;\nThe first attention layer does nothing by setting all weights to ;\nBy Lemma E.5 ###reference_theorem5###, can be simulated by 2-layer ReLU networks using hidden neurons. Thus we use the first feedforward-layer to compute the function for all with totally hidden neurons. Therefore if , then , which implies ; if , then , thus .\nThis step exactly requires . It suffices to set the attention score of the second layer at th position for all . This can be done by setting . By Lemma E.2 ###reference_theorem2###, we have . Since rounded sum of any number of is still and , we know that\nfor all . Note in this step we use our specific rounding rule to copy all the previous with a sum of attention score larger than . We can just also use approximately uniform attention scores with an additional coefficient before since we have precision.\nFinally we set and .\nThe second MLP layer just does permutation and adds some constants into fixed coordinates. 
The construction is straightforward and thus omitted.\nThe second attention layer is the only attention layer which has non-zero weights. Using the feedforward ReLU networks from layer to , we can simulate the circuits in parallel for all by Lemma E.5 ###reference_theorem5###. In detail, Lemma E.5 ###reference_theorem5### ensures that we can use a two-layer fully-connected ReLU network with weights to simulate a layer of the circuits . Moreover, there is enough space in the embedding to reserve \u2019s needed by Lemma E.5 ###reference_theorem5###.\nThis step holds directly due to the property guaranteed in step . We note that with the property claimed in step 9, we have that . Thus if , then , which implies , otherwise if , then . In both cases, we have that\nSo far we have finished the proof for the general . Specifically, when , our proof shows that the constant-depth transformer can still simulate any constant-depth circuit, which means . Thus all the inclusions are equivalence, that is .\n\u220e\nWe first show that . For the case that the vocabulary of transformer , by Theorem 3.1 ###reference_theorem1###, we know for any , can be expressed by a circuit whose depth is uniformly upper bounded by some constant for all . This completes the proof when . When , we can use the binary encoding of elements in as the inputs of those gates constructed for the later layers of the transformer.\nThe other direction is almost the same as that of Theorem 3.7 ###reference_theorem7###, except that we now only need constant bits of precision because we do not need to simulate gates (Lemma E.5 ###reference_theorem5###).\n\u220e\nBy Lemma E.7 ###reference_theorem7###, it holds that for all , . By Theorem 3.7 ###reference_theorem7###, we know that for any . Thus for all . Also, note that the attention layer and fully-connected layer can be computed using poly-size circuits. Thus for any , for some integer . Combining these we conclude that for any , .\n\u220e\nIn this subsection, we prove a few auxiliary lemmas that are used in the proofs in Section 3.4 ###reference_###.\nUnlimited-fanin (resp. can be simulated by some 2-layer feedforward ReLU network with constant (resp. ) bits of precision constant hidden dimension and additional constant inputs of value 1.\nMathematically, let be the set of functions which can be a two-layer feedforward ReLU network with at most bits of precision and constant hidden dimension , where , such that for any ,\nWe have unlimited-fanin and .\nThe proof of Lemma E.5 ###reference_theorem5### is based on the following straightforward lemma (Lemma E.6 ###reference_theorem6###).\nFor any and , .\nIn particular, for any , .\nRecall that denotes for any . We have that for all and . Moreover, . Similarly, we have that and . In other words, we have\n;\n.\nTherefore for , we can set with , and we have that\nSimilarly for , we can set with , and we have that\nThe proofs for and are thus completed.\nNext we deal with . Note that for , we have that for all .\nwhere with .\n\u220e\nFor all , .\nWe first define as the problems solvable by circuits with standard gates exactly. Thus . Now we claim that for each , there is a , such that for all , it holds that there is a conjunction normal form (CNF) with at most clauses over that cannot be expressed by any circuit of size . This claim holds because of a simple counting argument. There are at least different such CNFs. 
On the other hand, it is well known that one can represent a -size circuit only allowing standard gates with bits (we need bits to encode the id of a gate). Thus the total number of different circuits of size at most is at most , which is smaller than for sufficiently large . We denote such for each by . Now we define the following language : if the input length of is for some , use the -clause CNF\u2019s output which cannot be expressed by size- circuits as the output; otherwise rejects (output ). Then clearly for all , thus . By construction, . This completes the proof.\n\u220e"
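As a concrete illustration of the gate-simulation step used above (Lemmas E.5 and E.6), the following minimal Python sketch checks the standard identities for simulating unbounded fan-in AND and OR over boolean inputs with a single ReLU layer. It ignores the lemmas' constraints on weight magnitude and finite-precision arithmetic, so it illustrates the idea rather than the paper's exact construction; the function names and the test vector are ours.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def fan_in_and(x):
    # AND(x_1, ..., x_n) = ReLU(sum_i x_i - (n - 1)) for x_i in {0, 1}
    return relu(np.sum(x) - (len(x) - 1))

def fan_in_or(x):
    # OR(x_1, ..., x_n) = 1 - ReLU(1 - sum_i x_i) for x_i in {0, 1}
    return 1.0 - relu(1.0 - np.sum(x))

x = np.array([1, 1, 0, 1])
print(fan_in_and(x), fan_in_or(x))  # prints 0.0 1.0
```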
|
| 108 |
+
},
|
| 109 |
+
{
|
| 110 |
+
"section_id": "Appendix 6",
|
| 111 |
+
"parent_section_id": null,
|
| 112 |
+
"section_name": "Appendix F Discussion on Variants in Transformer Architecture",
|
| 113 |
+
"text": "Allowing LayerNorm changes the function class that a transformer can express and the position of the layer norm also matters (Xiong et al., 2020 ###reference_b51###). However, the expressiveness results mentioned in this work still hold for the two most popular transformer architecture variants with LayerNorm \u2014 Post LayerNorm and Pre LayerNorm. The upper bounds on transformer expressiveness Theorems 3.1 ###reference_theorem1### and 3.2 ###reference_theorem2### clearly don\u2019t get affected by adding LayerNorm, which can be computed in polynomial time for each token.\nBelow we focus on the upper bound of the expressiveness of decoder-only transformers with or without CoT. In detail, we will explain why Theorems 3.3 ###reference_theorem3### and 3.7 ###reference_theorem7### still holds even with LayerNorm. Here the key observation is that, if each coordinate of ranges from and appear in pairs, then . Thus it suffices to show that we can slightly twist the construction of transformers in Theorems 3.3 ###reference_theorem3### and 3.7 ###reference_theorem7### that for all , is composed of and and they appear in pairs so the sum is always . Note that in the current construction, each only contains . It suffices to replace each dimension with four dimensions, in the sense , and . This can be done by changing the weights of the token embedding, position encoding, and the weights of the second layer of each fully-connected layer. For the outgoing layer, we just use the average of the new representation, which is exactly the same as the original value in all three cases.\nIn this paper, for simplicity, we only focus on the case where there is only one attention head in each layer. The main results in this paper still apply if we allow constantly many attention heads, because we can simulate an attention layer with heads with attention layers with one head. Allowing an arbitrary number of attention heads while fixing total embedding size might make the constant-depth transformers strictly more expressive in certain settings and we leave it for future works."
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"section_id": "Appendix 7",
|
| 117 |
+
"parent_section_id": null,
|
| 118 |
+
"section_name": "Appendix G Discussion on Non-uniformity",
|
| 119 |
+
"text": "Non-uniform computation models allow a different program for each different input length, like boolean circuits. However, the complexity class defined by circuits can also be uniform, if we add additional assumption on the correlation between circuits of different input lengths, e.g., one can require the circuits for input length can be generated by a Turing Machine taken as input in using a certain amount of time and space.\nThe complexity class introduced in this paper can also be made uniform by enforcing an additional assumption, that the parameters of the transformer can be generalized by some Turing Machine given the input sequence length . It is well-known that one can simulate the execution of the Turing Machine for any steps by a family of uniform boolean circuits of size . Thus if we enforce the parameters of transformers in to be uniform, our main theorem would imply that constant-depth transformers with uniform parameters and polynomially many steps of chain of thoughts can solve all problems in . Also note that the inference of transformers can also be done in polynomial time, we conclude it is exactly equal to .\nOne natural question about non-uniformity is that whether having a different transformer for each input sequence length is practical, given that a significant portion of previous theoretical works on transformer expressiveness focuses on the uniform setting. This problem is kind of ill-defined because we haven\u2019t been able to scale up the input length to arbitrary length in practice, and thus it is not clear if it is necessary to keep scaling up the size of LLMs for longer input sequence length. But at least for the LLMs that have been seen in practice, it seems quite common to scale up the model size when dealing with longer input sequence length. Also taking the GPT architecture (Radford et al., 2019 ###reference_b38###) that we focus on in this paper, having more trainable parameters is necessary for longer input sequence length, due to the trainable absolute position encoding.\nStill, one needs to note that there is a difference between natural language tasks and complexity class, where the former has a lot of memorization and does not require a strong ability to solve math problems of any sequence length. In contrast, to learn this complexity class like the composition of permutation of any length, transformers need to have the ability of length generalization, which does seem impossible for certain non-uniform models, e.g., like GPT architectures with trainable absolute position encoding, because there is no way to learn the position encoding at an unseen position in the training dataset. Of course, length generalization would still be possible if GPT architecture learned the ground truth without using the trainable position encoding at all."
|
| 120 |
+
}
|
| 121 |
+
],
|
| 122 |
+
"tables": {},
|
| 123 |
+
"image_paths": {
|
| 124 |
+
"1": {
|
| 125 |
+
"figure_path": "2402.12875v4_figure_1.png",
|
| 126 |
+
"caption": "Figure 1: Relationship diagram between cotcomplexity class with different embedding sizes d\u2062(n)\ud835\udc51\ud835\udc5bd(n)italic_d ( italic_n ) and CoT lengths T\u2062(n)\ud835\udc47\ud835\udc5bT(n)italic_T ( italic_n ). We fix the precision to be constant (the above diagram holds with or without constantly many exponent bits) and omit them in the notation for simplicity. The diagram for log precision is similar (with \\AC0superscript\\AC0\\AC^{0}start_POSTSUPERSCRIPT 0 end_POSTSUPERSCRIPT replaced by \\TC0superscript\\TC0\\TC^{0}start_POSTSUPERSCRIPT 0 end_POSTSUPERSCRIPT), and is thus deferred to the appendix, Figure 10.",
|
| 127 |
+
"url": "http://arxiv.org/html/2402.12875v4/x1.png"
|
| 128 |
+
},
|
| 129 |
+
"2(a)": {
|
| 130 |
+
"figure_path": "2402.12875v4_figure_2(a).png",
|
| 131 |
+
"caption": "(a) Original Circuit\nFigure 2: Illustration of Theorem 3.3 on a 2-gate and 2-input circuit.",
|
| 132 |
+
"url": "http://arxiv.org/html/2402.12875v4/x2.png"
|
| 133 |
+
},
|
| 134 |
+
"2(b)": {
|
| 135 |
+
"figure_path": "2402.12875v4_figure_2(b).png",
|
| 136 |
+
"caption": "(b) Forward pass of the transformer with CoT at position 3, computing x4subscript\ud835\udc654x_{4}italic_x start_POSTSUBSCRIPT 4 end_POSTSUBSCRIPT in Figure 2(a). The position embedding comes from the third row of the right serialization in Figure 2(c).\nFigure 2: Illustration of Theorem 3.3 on a 2-gate and 2-input circuit.",
|
| 137 |
+
"url": "http://arxiv.org/html/2402.12875v4/x3.png"
|
| 138 |
+
},
|
| 139 |
+
"2(c)": {
|
| 140 |
+
"figure_path": "2402.12875v4_figure_2(c).png",
|
| 141 |
+
"caption": "(c) Two ways to serialize circuits. The left (blue) one is the most natural one and the right (green) one is used to construct the position embedding so the transformer with CoT simulates the original circuit in Figure 2(a).\nFigure 2: Illustration of Theorem 3.3 on a 2-gate and 2-input circuit.",
|
| 142 |
+
"url": "http://arxiv.org/html/2402.12875v4/x4.png"
|
| 143 |
+
},
|
| 144 |
+
"3": {
|
| 145 |
+
"figure_path": "2402.12875v4_figure_3.png",
|
| 146 |
+
"caption": "Figure 3: Permutation Composition (S5subscript\ud835\udc465S_{5}italic_S start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT). The label is the composition of all the permutations, where given two permutation \u03c3=(\u03c31,\u2026,\u03c35)\ud835\udf0esubscript\ud835\udf0e1\u2026subscript\ud835\udf0e5\\sigma=(\\sigma_{1},\\ldots,\\sigma_{5})italic_\u03c3 = ( italic_\u03c3 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , \u2026 , italic_\u03c3 start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT ), \u03c0=(\u03c01,\u2026,\u03c05)\ud835\udf0bsubscript\ud835\udf0b1\u2026subscript\ud835\udf0b5\\pi=(\\pi_{1},\\ldots,\\pi_{5})italic_\u03c0 = ( italic_\u03c0 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , \u2026 , italic_\u03c0 start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT ), we define \u03c3\u2218\u03c0\u225c(\u03c3\u03c01,\u2026,\u03c3\u03c05)\u225c\ud835\udf0e\ud835\udf0bsubscript\ud835\udf0esubscript\ud835\udf0b1\u2026subscript\ud835\udf0esubscript\ud835\udf0b5\\sigma\\circ\\pi\\triangleq(\\sigma_{\\pi_{1}},\\ldots,\\sigma_{\\pi_{5}})italic_\u03c3 \u2218 italic_\u03c0 \u225c ( italic_\u03c3 start_POSTSUBSCRIPT italic_\u03c0 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT end_POSTSUBSCRIPT , \u2026 , italic_\u03c3 start_POSTSUBSCRIPT italic_\u03c0 start_POSTSUBSCRIPT 5 end_POSTSUBSCRIPT end_POSTSUBSCRIPT ). The chain of thoughts and hints are the partial compositions (string A).\nOnly CoT can solve this task well, as predicted by our Theorem 3.5. Note for the most time the accuracy without CoT is \u223c20%similar-toabsentpercent20\\sim 20\\%\u223c 20 %, which is no better than randomly guessing a number between 1111 and 5555.",
|
| 147 |
+
"url": "http://arxiv.org/html/2402.12875v4/x5.png"
|
| 148 |
+
},
|
| 149 |
+
"4": {
|
| 150 |
+
"figure_path": "2402.12875v4_figure_4.png",
|
| 151 |
+
"caption": "Figure 4: Modular Addition(C7subscript\ud835\udc367C_{7}italic_C start_POSTSUBSCRIPT 7 end_POSTSUBSCRIPT). The label is the sum of the inputs modulo a positive integer, which is 7777 in this case. The chain of thoughts and hints are the partial modular sum. Low-depth transformers with hint can solve this task well for a reasonable input sequence length, but with cot the performance is much better, especially with a long input sequence, as predicted by our Theorem 3.3. See experiments for C2subscript\ud835\udc362C_{2}italic_C start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT in Figure 7.",
|
| 152 |
+
"url": "http://arxiv.org/html/2402.12875v4/x6.png"
|
| 153 |
+
},
|
| 154 |
+
"5": {
|
| 155 |
+
"figure_path": "2402.12875v4_figure_5.png",
|
| 156 |
+
"caption": "Figure 5: Iterated Squaring(IS). The vocabulary \ud835\udcb1\u225c{0,1,\u2026,T\u22121,=,^ 2}\u225c\ud835\udcb101\u2026\ud835\udc471^ 2{\\mathcal{V}}\\triangleq\\{0,1,\\ldots,T-1,=,\\text{\\^{} 2}\\}caligraphic_V \u225c { 0 , 1 , \u2026 , italic_T - 1 , = , ^ 2 } with T=1000\ud835\udc471000T=1000italic_T = 1000. We randomly generate input of format (p,r,^ 2,\u2026,^ 2,=)\ud835\udc5d\ud835\udc5f^ 2\u2026^ 2(p,r,{\\text{\\^{} 2},\\ldots,\\text{\\^{} 2}},=)( italic_p , italic_r , ^ 2 , \u2026 , ^ 2 , = ) with 1\u2264r,p\u2264T\u22121formulae-sequence1\ud835\udc5f\ud835\udc5d\ud835\udc4711\\leq r,p\\leq T-11 \u2264 italic_r , italic_p \u2264 italic_T - 1, p\ud835\udc5dpitalic_p being a prime and random number of ^2 tokens (at most m\ud835\udc5amitalic_m).\nThe label is fr,p\u2062(n)\u2261(r2n)modpsubscript\ud835\udc53\ud835\udc5f\ud835\udc5d\ud835\udc5bmodulosuperscript\ud835\udc5fsuperscript2\ud835\udc5b\ud835\udc5df_{r,p}(n)\\equiv(r^{2^{n}})\\mod pitalic_f start_POSTSUBSCRIPT italic_r , italic_p end_POSTSUBSCRIPT ( italic_n ) \u2261 ( italic_r start_POSTSUPERSCRIPT 2 start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT end_POSTSUPERSCRIPT ) roman_mod italic_p. CoT and hints are (fr,p\u2062(i))i=1nsuperscriptsubscriptsubscript\ud835\udc53\ud835\udc5f\ud835\udc5d\ud835\udc56\ud835\udc561\ud835\udc5b(f_{r,p}(i))_{i=1}^{n}( italic_f start_POSTSUBSCRIPT italic_r , italic_p end_POSTSUBSCRIPT ( italic_i ) ) start_POSTSUBSCRIPT italic_i = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT. Though our construction does not exactly satisfy the technical conditions of the hardness assumption, this problem is difficult for transformers without CoT to learn, but can be perfectly expressed with CoT even with depth 1.",
|
| 157 |
+
"url": "http://arxiv.org/html/2402.12875v4/x7.png"
|
| 158 |
+
},
|
| 159 |
+
"6": {
|
| 160 |
+
"figure_path": "2402.12875v4_figure_6.png",
|
| 161 |
+
"caption": "Figure 6: Circuit Value Problem(CVP). Given a randomly generated circuit with m\ud835\udc5amitalic_m gates (sorted by topological order), the vocabulary \ud835\udcb1=[m]\u222a{\ud835\uddb3\ud835\uddb1\ud835\uddb4\ud835\udda4,\ud835\udda5\ud835\udda0\ud835\uddab\ud835\uddb2\ud835\udda4,\ud835\udda0\ud835\uddad\ud835\udda3,\ud835\uddae\ud835\uddb1,\ud835\uddad\ud835\uddae\ud835\uddb3,\ud835\uddad\ud835\udda0,=}\ud835\udcb1delimited-[]\ud835\udc5a\ud835\uddb3\ud835\uddb1\ud835\uddb4\ud835\udda4\ud835\udda5\ud835\udda0\ud835\uddab\ud835\uddb2\ud835\udda4\ud835\udda0\ud835\uddad\ud835\udda3\ud835\uddae\ud835\uddb1\ud835\uddad\ud835\uddae\ud835\uddb3\ud835\uddad\ud835\udda0{\\mathcal{V}}=[m]\\cup\\{\\mathsf{TRUE},\\mathsf{FALSE},\\mathsf{AND},\\mathsf{OR},%\n\\mathsf{NOT},\\mathsf{NA},=\\}caligraphic_V = [ italic_m ] \u222a { sansserif_TRUE , sansserif_FALSE , sansserif_AND , sansserif_OR , sansserif_NOT , sansserif_NA , = }. Each gate is represented by four consecutive tokens, which are gate type, two input gate ids, and the current gate id. The output is the value of the last gate m\ud835\udc5amitalic_m. CoT and hints also contain 4 tokens for each gate, which are gate type, two input gate values, and the current gate value.",
|
| 162 |
+
"url": "http://arxiv.org/html/2402.12875v4/x8.png"
|
| 163 |
+
},
|
| 164 |
+
"7": {
|
| 165 |
+
"figure_path": "2402.12875v4_figure_7.png",
|
| 166 |
+
"caption": "Figure 7: Results of Modular Addition C2subscript\ud835\udc362C_{2}italic_C start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT.",
|
| 167 |
+
"url": "http://arxiv.org/html/2402.12875v4/x9.png"
|
| 168 |
+
},
|
| 169 |
+
"8(a)": {
|
| 170 |
+
"figure_path": "2402.12875v4_figure_8(a).png",
|
| 171 |
+
"caption": "Figure 8: Results of base on Permutation Composition, Iterated Squaring, and Circuit Value Problem.",
|
| 172 |
+
"url": "http://arxiv.org/html/2402.12875v4/x10.png"
|
| 173 |
+
},
|
| 174 |
+
"8(b)": {
|
| 175 |
+
"figure_path": "2402.12875v4_figure_8(b).png",
|
| 176 |
+
"caption": "Figure 8: Results of base on Permutation Composition, Iterated Squaring, and Circuit Value Problem.",
|
| 177 |
+
"url": "http://arxiv.org/html/2402.12875v4/x11.png"
|
| 178 |
+
},
|
| 179 |
+
"8(c)": {
|
| 180 |
+
"figure_path": "2402.12875v4_figure_8(c).png",
|
| 181 |
+
"caption": "Figure 8: Results of base on Permutation Composition, Iterated Squaring, and Circuit Value Problem.",
|
| 182 |
+
"url": "http://arxiv.org/html/2402.12875v4/x12.png"
|
| 183 |
+
},
|
| 184 |
+
"9(a)": {
|
| 185 |
+
"figure_path": "2402.12875v4_figure_9(a).png",
|
| 186 |
+
"caption": "Figure 9: Results of Modular Addition base on C2subscript\ud835\udc362C_{2}italic_C start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and C7subscript\ud835\udc367C_{7}italic_C start_POSTSUBSCRIPT 7 end_POSTSUBSCRIPT.",
|
| 187 |
+
"url": "http://arxiv.org/html/2402.12875v4/x13.png"
|
| 188 |
+
},
|
| 189 |
+
"9(b)": {
|
| 190 |
+
"figure_path": "2402.12875v4_figure_9(b).png",
|
| 191 |
+
"caption": "Figure 9: Results of Modular Addition base on C2subscript\ud835\udc362C_{2}italic_C start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and C7subscript\ud835\udc367C_{7}italic_C start_POSTSUBSCRIPT 7 end_POSTSUBSCRIPT.",
|
| 192 |
+
"url": "http://arxiv.org/html/2402.12875v4/x14.png"
|
| 193 |
+
},
|
| 194 |
+
"10": {
|
| 195 |
+
"figure_path": "2402.12875v4_figure_10.png",
|
| 196 |
+
"caption": "Figure 10: Relationship diagram between cotcomplexity class with different embedding sizes d\u2062(n)\ud835\udc51\ud835\udc5bd(n)italic_d ( italic_n ) and CoT lengths T\u2062(n)\ud835\udc47\ud835\udc5bT(n)italic_T ( italic_n ). We fix the precision to be log\u2061(n)\ud835\udc5b\\log(n)roman_log ( italic_n ) and the number of exponents bit as 00. This is the counterpart of its finite precision version Figure 1",
|
| 197 |
+
"url": "http://arxiv.org/html/2402.12875v4/x15.png"
|
| 198 |
+
}
|
| 199 |
+
},
|
| 200 |
+
"validation": true,
|
| 201 |
+
"references": [
|
| 202 |
+
{
|
| 203 |
+
"1": {
|
| 204 |
+
"title": "Gpt-4 technical report.",
|
| 205 |
+
"author": "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al.",
|
| 206 |
+
"venue": "arXiv preprint arXiv:2303.08774, 2023.",
|
| 207 |
+
"url": null
|
| 208 |
+
}
|
| 209 |
+
},
|
| 210 |
+
{
|
| 211 |
+
"2": {
|
| 212 |
+
"title": "Palm 2 technical report.",
|
| 213 |
+
"author": "Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al.",
|
| 214 |
+
"venue": "arXiv preprint arXiv:2305.10403, 2023.",
|
| 215 |
+
"url": null
|
| 216 |
+
}
|
| 217 |
+
},
|
| 218 |
+
{
|
| 219 |
+
"3": {
|
| 220 |
+
"title": "Layer normalization.",
|
| 221 |
+
"author": "Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton.",
|
| 222 |
+
"venue": "arXiv preprint arXiv:1607.06450, 2016.",
|
| 223 |
+
"url": null
|
| 224 |
+
}
|
| 225 |
+
},
|
| 226 |
+
{
|
| 227 |
+
"4": {
|
| 228 |
+
"title": "Bounded-width polynomial-size branching programs recognize exactly those languages in nc.",
|
| 229 |
+
"author": "David A. Barrington.",
|
| 230 |
+
"venue": "pp. 1\u20135, 1986.",
|
| 231 |
+
"url": null
|
| 232 |
+
}
|
| 233 |
+
},
|
| 234 |
+
{
|
| 235 |
+
"5": {
|
| 236 |
+
"title": "Unbounded fan-in circuits and associative functions.",
|
| 237 |
+
"author": "Ashok K Chandra, Steven Fortune, and Richard Lipton.",
|
| 238 |
+
"venue": "In Proceedings of the fifteenth annual ACM symposium on Theory of computing, pp. 52\u201360, 1983.",
|
| 239 |
+
"url": null
|
| 240 |
+
}
|
| 241 |
+
},
|
| 242 |
+
{
|
| 243 |
+
"6": {
|
| 244 |
+
"title": "Tighter bounds on the expressivity of transformer encoders.",
|
| 245 |
+
"author": "David Chiang, Peter Cholak, and Anand Pillay.",
|
| 246 |
+
"venue": "arXiv preprint arXiv:2301.10743, 2023.",
|
| 247 |
+
"url": null
|
| 248 |
+
}
|
| 249 |
+
},
|
| 250 |
+
{
|
| 251 |
+
"7": {
|
| 252 |
+
"title": "Palm: Scaling language modeling with pathways.",
|
| 253 |
+
"author": "Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel.",
|
| 254 |
+
"venue": "Journal of Machine Learning Research, 24(240):1\u2013113, 2023.",
|
| 255 |
+
"url": null
|
| 256 |
+
}
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"8": {
|
| 260 |
+
"title": "Scaling instruction-finetuned language models.",
|
| 261 |
+
"author": "Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al.",
|
| 262 |
+
"venue": "arXiv preprint arXiv:2210.11416, 2022.",
|
| 263 |
+
"url": null
|
| 264 |
+
}
|
| 265 |
+
},
|
| 266 |
+
{
|
| 267 |
+
"9": {
|
| 268 |
+
"title": "What does bert look at? an analysis of bert\u2019s attention.",
|
| 269 |
+
"author": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning.",
|
| 270 |
+
"venue": "arXiv preprint arXiv:1906.04341, 2019.",
|
| 271 |
+
"url": null
|
| 272 |
+
}
|
| 273 |
+
},
|
| 274 |
+
{
|
| 275 |
+
"10": {
|
| 276 |
+
"title": "Training verifiers to solve math word problems.",
|
| 277 |
+
"author": "Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al.",
|
| 278 |
+
"venue": "arXiv preprint arXiv:2110.14168, 2021.",
|
| 279 |
+
"url": null
|
| 280 |
+
}
|
| 281 |
+
},
|
| 282 |
+
{
|
| 283 |
+
"11": {
|
| 284 |
+
"title": "Universal transformers.",
|
| 285 |
+
"author": "Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and \u0141ukasz Kaiser.",
|
| 286 |
+
"venue": "arXiv preprint arXiv:1807.03819, 2018.",
|
| 287 |
+
"url": null
|
| 288 |
+
}
|
| 289 |
+
},
|
| 290 |
+
{
|
| 291 |
+
"12": {
|
| 292 |
+
"title": "Inductive biases and variable creation in self-attention mechanisms.",
|
| 293 |
+
"author": "Benjamin L Edelman, Surbhi Goel, Sham Kakade, and Cyril Zhang.",
|
| 294 |
+
"venue": "In International Conference on Machine Learning, pp. 5793\u20135831. PMLR, 2022.",
|
| 295 |
+
"url": null
|
| 296 |
+
}
|
| 297 |
+
},
|
| 298 |
+
{
|
| 299 |
+
"13": {
|
| 300 |
+
"title": "Towards revealing the mystery behind chain of thought: a theoretical perspective.",
|
| 301 |
+
"author": "Guhao Feng, Yuntian Gu, Bohang Zhang, Haotian Ye, Di He, and Liwei Wang.",
|
| 302 |
+
"venue": "arXiv preprint arXiv:2305.15408, 2023.",
|
| 303 |
+
"url": null
|
| 304 |
+
}
|
| 305 |
+
},
|
| 306 |
+
{
|
| 307 |
+
"14": {
|
| 308 |
+
"title": "Looped transformers as programmable computers.",
|
| 309 |
+
"author": "Angeliki Giannou, Shashank Rajput, Jy-yong Sohn, Kangwook Lee, Jason D Lee, and Dimitris Papailiopoulos.",
|
| 310 |
+
"venue": "arXiv preprint arXiv:2301.13196, 2023.",
|
| 311 |
+
"url": null
|
| 312 |
+
}
|
| 313 |
+
},
|
| 314 |
+
{
|
| 315 |
+
"15": {
|
| 316 |
+
"title": "What every computer scientist should know about floating-point arithmetic.",
|
| 317 |
+
"author": "David Goldberg.",
|
| 318 |
+
"venue": "ACM computing surveys (CSUR), 23(1):5\u201348, 1991.",
|
| 319 |
+
"url": null
|
| 320 |
+
}
|
| 321 |
+
},
|
| 322 |
+
{
|
| 323 |
+
"16": {
|
| 324 |
+
"title": "Theoretical limitations of self-attention in neural sequence models.",
|
| 325 |
+
"author": "Michael Hahn.",
|
| 326 |
+
"venue": "Transactions of the Association for Computational Linguistics, 8:156\u2013171, 2020.",
|
| 327 |
+
"url": null
|
| 328 |
+
}
|
| 329 |
+
},
|
| 330 |
+
{
|
| 331 |
+
"17": {
|
| 332 |
+
"title": "Formal language recognition by hard attention transformers: Perspectives from circuit complexity.",
|
| 333 |
+
"author": "Yiding Hao, Dana Angluin, and Robert Frank.",
|
| 334 |
+
"venue": "Transactions of the Association for Computational Linguistics, 10:800\u2013810, 2022.",
|
| 335 |
+
"url": null
|
| 336 |
+
}
|
| 337 |
+
},
|
| 338 |
+
{
|
| 339 |
+
"18": {
|
| 340 |
+
"title": "Division is in uniform tc0.",
|
| 341 |
+
"author": "William Hesse.",
|
| 342 |
+
"venue": "In International Colloquium on Automata, Languages, and Programming, pp. 104\u2013114. Springer, 2001.",
|
| 343 |
+
"url": null
|
| 344 |
+
}
|
| 345 |
+
},
|
| 346 |
+
{
|
| 347 |
+
"19": {
|
| 348 |
+
"title": "Ieee standard for floating-point arithmetic.",
|
| 349 |
+
"author": "IEEE.",
|
| 350 |
+
"venue": "IEEE Std 754-2008, pp. 1\u201370, 2008.",
|
| 351 |
+
"url": null
|
| 352 |
+
}
|
| 353 |
+
},
|
| 354 |
+
{
|
| 355 |
+
"20": {
|
| 356 |
+
"title": "Adam: A method for stochastic optimization.",
|
| 357 |
+
"author": "Diederik P Kingma and Jimmy Ba.",
|
| 358 |
+
"venue": "arXiv preprint arXiv:1412.6980, 2014.",
|
| 359 |
+
"url": null
|
| 360 |
+
}
|
| 361 |
+
},
|
| 362 |
+
{
|
| 363 |
+
"21": {
|
| 364 |
+
"title": "Large language models are zero-shot reasoners.",
|
| 365 |
+
"author": "Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa.",
|
| 366 |
+
"venue": "Advances in Neural Information Processing Systems, 2022.",
|
| 367 |
+
"url": null
|
| 368 |
+
}
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"22": {
|
| 372 |
+
"title": "Algebraic theory of machines. i. prime decomposition theorem for finite semigroups and machines.",
|
| 373 |
+
"author": "Kenneth Krohn and John Rhodes.",
|
| 374 |
+
"venue": "Transactions of the American Mathematical Society, 116:450\u2013464, 1965.",
|
| 375 |
+
"url": null
|
| 376 |
+
}
|
| 377 |
+
},
|
| 378 |
+
{
|
| 379 |
+
"23": {
|
| 380 |
+
"title": "Program induction by rationale generation: Learning to solve and explain algebraic word problems.",
|
| 381 |
+
"author": "Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom.",
|
| 382 |
+
"venue": "arXiv preprint arXiv:1705.04146, 2017.",
|
| 383 |
+
"url": null
|
| 384 |
+
}
|
| 385 |
+
},
|
| 386 |
+
{
|
| 387 |
+
"24": {
|
| 388 |
+
"title": "Transformers learn shortcuts to automata.",
|
| 389 |
+
"author": "Bingbin Liu, Jordan T Ash, Surbhi Goel, Akshay Krishnamurthy, and Cyril Zhang.",
|
| 390 |
+
"venue": "arXiv preprint arXiv:2210.10749, 2022a.",
|
| 391 |
+
"url": null
|
| 392 |
+
}
|
| 393 |
+
},
|
| 394 |
+
{
|
| 395 |
+
"25": {
|
| 396 |
+
"title": "Towards efficient and scalable sharpness-aware minimization.",
|
| 397 |
+
"author": "Yong Liu, Siqi Mai, Xiangning Chen, Cho-Jui Hsieh, and Yang You.",
|
| 398 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12360\u201312370, 2022b.",
|
| 399 |
+
"url": null
|
| 400 |
+
}
|
| 401 |
+
},
|
| 402 |
+
{
|
| 403 |
+
"26": {
|
| 404 |
+
"title": "Fiat-shamir for repeated squaring with applications to ppad-hardness and vdfs.",
|
| 405 |
+
"author": "Alex Lombardi and Vinod Vaikuntanathan.",
|
| 406 |
+
"venue": "In Advances in Cryptology\u2013CRYPTO 2020: 40th Annual International Cryptology Conference, CRYPTO 2020, Santa Barbara, CA, USA, August 17\u201321, 2020, Proceedings, Part III, pp. 632\u2013651. Springer, 2020.",
|
| 407 |
+
"url": null
|
| 408 |
+
}
|
| 409 |
+
},
|
| 410 |
+
{
|
| 411 |
+
"27": {
|
| 412 |
+
"title": "Text and patterns: For effective chain of thought, it takes two to tango.",
|
| 413 |
+
"author": "Aman Madaan and Amir Yazdanbakhsh.",
|
| 414 |
+
"venue": "arXiv preprint arXiv:2209.07686, 2022.",
|
| 415 |
+
"url": null
|
| 416 |
+
}
|
| 417 |
+
},
|
| 418 |
+
{
|
| 419 |
+
"28": {
|
| 420 |
+
"title": "On the krohn-rhodes cascaded decomposition theorem.",
|
| 421 |
+
"author": "Oded Maler.",
|
| 422 |
+
"venue": "In Time for Verification: Essays in Memory of Amir Pnueli, pp. 260\u2013278. Springer, 2010.",
|
| 423 |
+
"url": null
|
| 424 |
+
}
|
| 425 |
+
},
|
| 426 |
+
{
|
| 427 |
+
"29": {
|
| 428 |
+
"title": "Counter-Free Automata (MIT research monograph no. 65).",
|
| 429 |
+
"author": "Robert McNaughton and Seymour A Papert.",
|
| 430 |
+
"venue": "The MIT Press, 1971.",
|
| 431 |
+
"url": null
|
| 432 |
+
}
|
| 433 |
+
},
|
| 434 |
+
{
|
| 435 |
+
"30": {
|
| 436 |
+
"title": "A logic for expressing log-precision transformers.",
|
| 437 |
+
"author": "William Merrill and Ashish Sabharwal.",
|
| 438 |
+
"venue": "In Thirty-seventh Conference on Neural Information Processing Systems, 2023a.",
|
| 439 |
+
"url": null
|
| 440 |
+
}
|
| 441 |
+
},
|
| 442 |
+
{
|
| 443 |
+
"31": {
|
| 444 |
+
"title": "The parallelism tradeoff: Limitations of log-precision transformers.",
|
| 445 |
+
"author": "William Merrill and Ashish Sabharwal.",
|
| 446 |
+
"venue": "Transactions of the Association for Computational Linguistics, 11:531\u2013545, 2023b.",
|
| 447 |
+
"url": null
|
| 448 |
+
}
|
| 449 |
+
},
|
| 450 |
+
{
|
| 451 |
+
"32": {
|
| 452 |
+
"title": "On the power of saturated transformers: A view from circuit complexity.",
|
| 453 |
+
"author": "William Merrill, Yoav Goldberg, and Noah A Smith.",
|
| 454 |
+
"venue": "arXiv preprint arXiv:2106.16213, 2021.",
|
| 455 |
+
"url": null
|
| 456 |
+
}
|
| 457 |
+
},
|
| 458 |
+
{
|
| 459 |
+
"33": {
|
| 460 |
+
"title": "Saturated transformers are constant-depth threshold circuits.",
|
| 461 |
+
"author": "William Merrill, Ashish Sabharwal, and Noah A Smith.",
|
| 462 |
+
"venue": "Transactions of the Association for Computational Linguistics, 10:843\u2013856, 2022.",
|
| 463 |
+
"url": null
|
| 464 |
+
}
|
| 465 |
+
},
|
| 466 |
+
{
|
| 467 |
+
"34": {
|
| 468 |
+
"title": "Show your work: Scratchpads for intermediate computation with language models.",
|
| 469 |
+
"author": "Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al.",
|
| 470 |
+
"venue": "arXiv preprint arXiv:2112.00114, 2021.",
|
| 471 |
+
"url": null
|
| 472 |
+
}
|
| 473 |
+
},
|
| 474 |
+
{
|
| 475 |
+
"35": {
|
| 476 |
+
"title": "On the turing completeness of modern neural network architectures.",
|
| 477 |
+
"author": "Jorge P\u00e9rez, Javier Marinkovi\u0107, and Pablo Barcel\u00f3.",
|
| 478 |
+
"venue": "arXiv preprint arXiv:1901.03429, 2019.",
|
| 479 |
+
"url": null
|
| 480 |
+
}
|
| 481 |
+
},
|
| 482 |
+
{
|
| 483 |
+
"36": {
|
| 484 |
+
"title": "Attention is turing complete.",
|
| 485 |
+
"author": "Jorge P\u00e9rez, Pablo Barcel\u00f3, and Javier Marinkovic.",
|
| 486 |
+
"venue": "The Journal of Machine Learning Research, 22(1):3463\u20133497, 2021.",
|
| 487 |
+
"url": null
|
| 488 |
+
}
|
| 489 |
+
},
|
| 490 |
+
{
|
| 491 |
+
"37": {
|
| 492 |
+
"title": "Relations among complexity measures.",
|
| 493 |
+
"author": "Nicholas Pippenger and Michael J Fischer.",
|
| 494 |
+
"venue": "Journal of the ACM (JACM), 26(2):361\u2013381, 1979.",
|
| 495 |
+
"url": null
|
| 496 |
+
}
|
| 497 |
+
},
|
| 498 |
+
{
|
| 499 |
+
"38": {
|
| 500 |
+
"title": "Language models are unsupervised multitask learners.",
|
| 501 |
+
"author": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever.",
|
| 502 |
+
"venue": "OpenAI Blog, 1(8), 2019.",
|
| 503 |
+
"url": null
|
| 504 |
+
}
|
| 505 |
+
},
|
| 506 |
+
{
|
| 507 |
+
"39": {
|
| 508 |
+
"title": "Prompt programming for large language models: Beyond the few-shot paradigm.",
|
| 509 |
+
"author": "Laria Reynolds and Kyle McDonell.",
|
| 510 |
+
"venue": "In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1\u20137, 2021.",
|
| 511 |
+
"url": null
|
| 512 |
+
}
|
| 513 |
+
},
|
| 514 |
+
{
|
| 515 |
+
"40": {
|
| 516 |
+
"title": "Time-lock puzzles and timed-release crypto.",
|
| 517 |
+
"author": "Ronald L Rivest, Adi Shamir, and David A Wagner.",
|
| 518 |
+
"venue": "1996.",
|
| 519 |
+
"url": null
|
| 520 |
+
}
|
| 521 |
+
},
|
| 522 |
+
{
|
| 523 |
+
"41": {
|
| 524 |
+
"title": "Mathematical discoveries from program search with large language models.",
|
| 525 |
+
"author": "Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog, M. Pawan Kumar, Emilien Dupont, Francisco J. R. Ruiz, Jordan Ellenberg, Pengming Wang, Omar Fawzi, Pushmeet Kohli, and Alhussein Fawzi.",
|
| 526 |
+
"venue": "Nature, 2023.",
|
| 527 |
+
"url": null
|
| 528 |
+
}
|
| 529 |
+
},
|
| 530 |
+
{
|
| 531 |
+
"42": {
|
| 532 |
+
"title": "Bert rediscovers the classical nlp pipeline.",
|
| 533 |
+
"author": "Ian Tenney, Dipanjan Das, and Ellie Pavlick.",
|
| 534 |
+
"venue": "arXiv preprint arXiv:1905.05950, 2019.",
|
| 535 |
+
"url": null
|
| 536 |
+
}
|
| 537 |
+
},
|
| 538 |
+
{
|
| 539 |
+
"43": {
|
| 540 |
+
"title": "Llama 2: Open foundation and fine-tuned chat models.",
|
| 541 |
+
"author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al.",
|
| 542 |
+
"venue": "arXiv preprint arXiv:2307.09288, 2023.",
|
| 543 |
+
"url": null
|
| 544 |
+
}
|
| 545 |
+
},
|
| 546 |
+
{
|
| 547 |
+
"44": {
|
| 548 |
+
"title": "Solving olympiad geometry without human demonstrations.",
|
| 549 |
+
"author": "Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He, and Thang Luong.",
|
| 550 |
+
"venue": "Nature, 625(7995):476\u2013482, 2024.",
|
| 551 |
+
"url": null
|
| 552 |
+
}
|
| 553 |
+
},
|
| 554 |
+
{
|
| 555 |
+
"45": {
|
| 556 |
+
"title": "Visualizing attention in transformer-based language representation models.",
|
| 557 |
+
"author": "Jesse Vig.",
|
| 558 |
+
"venue": "arXiv preprint arXiv:1904.02679, 2019.",
|
| 559 |
+
"url": null
|
| 560 |
+
}
|
| 561 |
+
},
|
| 562 |
+
{
|
| 563 |
+
"46": {
|
| 564 |
+
"title": "Towards understanding chain-of-thought prompting: An empirical study of what matters.",
|
| 565 |
+
"author": "Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, and Huan Sun.",
|
| 566 |
+
"venue": "arXiv preprint arXiv:2212.10001, 2022a.",
|
| 567 |
+
"url": null
|
| 568 |
+
}
|
| 569 |
+
},
|
| 570 |
+
{
|
| 571 |
+
"47": {
|
| 572 |
+
"title": "Interpretability in the wild: a circuit for indirect object identification in gpt-2 small.",
|
| 573 |
+
"author": "Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt.",
|
| 574 |
+
"venue": "arXiv preprint arXiv:2211.00593, 2022b.",
|
| 575 |
+
"url": null
|
| 576 |
+
}
|
| 577 |
+
},
|
| 578 |
+
{
|
| 579 |
+
"48": {
|
| 580 |
+
"title": "Chain of thought prompting elicits reasoning in large language models.",
|
| 581 |
+
"author": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou.",
|
| 582 |
+
"venue": "Advances in Neural Information Processing Systems, 2022.",
|
| 583 |
+
"url": null
|
| 584 |
+
}
|
| 585 |
+
},
|
| 586 |
+
{
|
| 587 |
+
"49": {
|
| 588 |
+
"title": "Thinking like transformers.",
|
| 589 |
+
"author": "Gail Weiss, Yoav Goldberg, and Eran Yahav.",
|
| 590 |
+
"venue": "In International Conference on Machine Learning, pp. 11080\u201311090. PMLR, 2021.",
|
| 591 |
+
"url": null
|
| 592 |
+
}
|
| 593 |
+
},
|
| 594 |
+
{
|
| 595 |
+
"50": {
|
| 596 |
+
"title": "Relativized circuit complexity.",
|
| 597 |
+
"author": "Christopher B Wilson.",
|
| 598 |
+
"venue": "Journal of Computer and System Sciences, 31(2):169\u2013181, 1985.",
|
| 599 |
+
"url": null
|
| 600 |
+
}
|
| 601 |
+
},
|
| 602 |
+
{
|
| 603 |
+
"51": {
|
| 604 |
+
"title": "On layer normalization in the transformer architecture.",
|
| 605 |
+
"author": "Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, and Tieyan Liu.",
|
| 606 |
+
"venue": "In International Conference on Machine Learning, pp. 10524\u201310533. PMLR, 2020.",
|
| 607 |
+
"url": null
|
| 608 |
+
}
|
| 609 |
+
},
|
| 610 |
+
{
|
| 611 |
+
"52": {
|
| 612 |
+
"title": "Circuits and local computation.",
|
| 613 |
+
"author": "Andrew Chi-Chih Yao.",
|
| 614 |
+
"venue": "pp. 186\u2013196, 1989.",
|
| 615 |
+
"url": null
|
| 616 |
+
}
|
| 617 |
+
},
|
| 618 |
+
{
|
| 619 |
+
"53": {
|
| 620 |
+
"title": "Self-attention networks can process bounded hierarchical languages.",
|
| 621 |
+
"author": "Shunyu Yao, Binghui Peng, Christos Papadimitriou, and Karthik Narasimhan.",
|
| 622 |
+
"venue": "arXiv preprint arXiv:2105.11115, 2021.",
|
| 623 |
+
"url": null
|
| 624 |
+
}
|
| 625 |
+
}
|
| 626 |
+
],
|
| 627 |
+
"url": "http://arxiv.org/html/2402.12875v4"
|
| 628 |
+
}
|
20240921/2403.02615v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240921/2403.02959v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240921/2403.07483v2.json
ADDED
|
@@ -0,0 +1,131 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"title": "DiabetesNet: A Deep Learning Approach to Diabetes Diagnosis",
|
| 3 |
+
"abstract": "Diabetes, resulting from inadequate insulin production or utilization, causes extensive harm to the body. Existing diagnostic methods are often invasive and come with drawbacks, such as cost constraints. Although there are machine learning models like Classwise k Nearest Neighbor (CkNN) and General Regression Neural Network (GRNN), they struggle with imbalanced data and result in under-performance. Leveraging advancements in sensor technology and machine learning, we propose a non-invasive diabetes diagnosis using a Back Propagation Neural Network (BPNN) with batch normalization, incorporating data re-sampling and normalization for class balancing. Our method addresses existing challenges such as limited performance associated with traditional machine learning. Experimental results on three datasets show significant improvements in overall accuracy, sensitivity, and specificity compared to traditional methods. Notably, we achieve accuracies of 89.81% in Pima diabetes dataset, 75.49% in CDC BRFSS2015 dataset, and 95.28% in Mesra Diabetes dataset. This underscores the potential of deep learning models for robust diabetes diagnosis. See project website\nhttps://steve-zeyu-zhang.github.io/DiabetesDiagnosis",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Diabetes Mellitus (DM) is a chronic disease, originating from the Greek word diabetes, characterized by persistently high blood glucose levels [28 ###reference_b28###]. It adversely affects the heart, blood vessels, eyes, kidneys, and nerves, doubling the risk of vascular disorders in individuals with diabetes [8 ###reference_b8###]. Evidence suggests a strong association between diabetes and certain malignancies (e.g., liver cancer) and other non-vascular illnesses [12 ###reference_b12###, 14 ###reference_b14###, 17 ###reference_b17###]. By the end of 2019, diabetes became the ninth leading cause of death, rising by 70% since 2000, with an 80% increase in male fatalities [25 ###reference_b25###]. Diabetes directly caused 1.5 million deaths worldwide, 48% before the age of 70 [9 ###reference_b9###]. Currently, 37.3 million people in the US, or 11.3% of the population, have diabetes, with 8.5 million undiagnosed individuals [11 ###reference_b11###]. Early diagnosis and treatment are crucial to prevent health risks as a \"Silent Killer\" [7 ###reference_b7###, 13 ###reference_b13###]. Implementing accurate prediction and monitoring approaches can significantly reduce the risk of developing the disease [29 ###reference_b29###].\nCurrently, the majority of methods for predicting and diagnosing diabetes still rely on blood glucose level measurement [40 ###reference_b40###]. Specifically, invasive blood glucose laboratory tests and glucometers are standard solutions for glucose monitoring at hospitals and homes, respectively [23 ###reference_b23###]. Although these methods can provide relatively accurate test results, some evident disadvantages, such as stringent demand for skills and types of equipment, prohibitive costs, time-consuming, and the pain associated with testing, cannot be ignored [37 ###reference_b37###].\nIn comparison, machine learning and deep learning-based diabetes diagnosis gather data from real-world datasets, which does not require special instruments and has the advantages of low cost and high efficiency. The most commonly used dataset is the Pima Diabetes dataset carried out by the US National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) [3 ###reference_b3###, 19 ###reference_b19###, 20 ###reference_b20###] and available at the University of California Irvine Machine Learning Repository [5 ###reference_b5###]. There are other well-known datasets that can be used in Diabetes diagnosis, such as CDC BRFSS 2015 Diabetes Health Indicators Dataset [10 ###reference_b10###, 39 ###reference_b39###, 34 ###reference_b34###], and BIT Mesra Diabetes Dataset 2019 [35 ###reference_b35###, 36 ###reference_b36###]. In the datasets utilized, the types of data encompass various symptomatic observations and body measurements associated with diabetes. The outcome or target variable is binary, indicating whether an individual has diabetes (positive) or does not have diabetes (negative). It is important to note that all the features employed in the datasets are acquired through non-invasive methods. 
However, it is crucial to acknowledge the absence of the gold standard diagnostic criteria for diabetes, such as blood glucose level, in the data collection process.\nSome machine learning methods have been proposed for diabetes prediction and achieved some encouraging progress with the three datasets mentioned earlier, for example, Classwise k Nearest Neighbor (CkNN) from Christobel\u2019s work [6 ###reference_b6###] and regression neural network (GRNN) from Kayaer\u2019s work [10 ###reference_b10###]. They tend to be statistical learning methods or feed-forward neural networks. Moreover, most of these models, such as Multi-Layer Feed Forward Neural Networks (MLFNN) from Kumar\u2019s work [21 ###reference_b21###], have been only validated on a single dataset without indicating sensitivity and specificity, which leads to a relatively limited persuasiveness. Smith et al. [33 ###reference_b33###] designed a prediction model based on an early neural network model, ADAP [32 ###reference_b32###, 31 ###reference_b31###], which is an adaptive learning method that generates and executes digital analogs of perceptron-like devices. They tested it on the Pima Indians diabetes dataset, and the performance was measured by sensitivity and specificity, which achieved 76% at the crossover point. Wahba et al. [38 ###reference_b38###] applied two models on diabetes datasets, penalized log-likelihood smoothing spline analysis of variance (PSA) and Generalized Linear Models (GLIM) [24 ###reference_b24###], which achieved accuracies of 72% and 74%, respectively. Breault et al. [4 ###reference_b4###] implemented a data mining algorithm, Rough sets [26 ###reference_b26###], with the standard/tuned voting method (RSES) on the Pima diabetes dataset. Out of 392 complete cases, the model achieved a predictive accuracy of 73.8% with a 95% CI of (71.3%, 76.3%). Christobel et al. [6 ###reference_b6###] addressed the missing value in the Pima diabetes dataset using the mean method and implemented a new Classwise k Nearest Neighbor (CkNN) algorithm for the prediction of diabetes. Through 10-fold cross-validation, the algorithm has achieved an accuracy of 78.16%. Kumari et al. [22 ###reference_b22###] proposed a classifier using Support Vector Machine (SVM) on Pima Indians diabetes dataset. The experimental results obtained an accuracy of 75.5% for RBF kernel SVM and 78.2% for SVM classification. Ahmad et al. [1 ###reference_b1###] designed a hybrid method that consists of an improved genetic algorithm (GA) for simultaneous parameter tuning and feature selection and a multi-layer perceptron (MLP) for classification. The model they developed obtained an accuracy of 80.4% on the Pima dataset. Kayaer et al. [18 ###reference_b18###] developed a model based on General Regression Neural Network (GRNN), which consists of an input layer, two hidden layers (32 and 16 neurons, respectively), and an output layer with only one neuron. The classifier was examined on the Pima Indian dataset and achieved an accuracy of 80.21%. Kumar et al. [21 ###reference_b21###] developed a classification model based on Multi-Layer Feed Forward Neural Networks (MLFNN), and achieved 81.73% accuracy on the Pima diabetes dataset using the mean method for missing values. Polat et al. [27 ###reference_b27###] developed a classification model on the Pima dataset using Generalized Discriminant Analysis combined with Least Square Support Vector Machine (GDA-LS-SVM). 
Using 10-fold cross-validation, they achieved 79.16% accuracy.\nHowever, these methods may also have some imperfections. For instance, some methods, such as the ADAP algorithm from Smith\u2019s work [32 ###reference_b32###, 31 ###reference_b31###] and the Rough set algorithm from Breault\u2019s work [4 ###reference_b4###], may have trained with imbalanced data directly without using the proper data preprocessing methods. This might lead the classifier to be biased toward the majority (negative) class and result in low sensitivity. Others may be designed a model that is not powerful enough or used an inappropriate model as the backbone for binary classification.\nThe primary objective of this paper is to propose and develop a deep-learning model and pipeline specifically designed for diabetes diagnosis. Our focus lies in leveraging data obtained through non-invasive methods as the sole input for our model. By solely relying on non-invasive data collection approaches, we aim to enhance the practicality and feasibility of the proposed solution for real-world applications. The developed model and pipeline strive to achieve accurate and reliable diabetes diagnosis based solely on non-invasive data, thereby mitigating the need for invasive diagnostic procedures and improving patient experience and convenience. We proposed a model based on Back Propagation Neural Network (BPNN) combined with batch normalization. The main contribution of this paper could be summarized as follows.\nWe improved the sensitivity through implementing undersample-balancing in the procedure of data preprocessing.\nWe proposed a deep learning model based on Back Propagation Neural Network (BPNN) for diabetes diagnosis. Specifically, by updating losses and biases through backward propagation, the accuracy of samples that are difficult to classify in some datasets has also been improved substantially.\nWe conduct experiments on four distinct real-world datasets with different features and dimensions. The horizontal comparison of the results indicates the superior performance of BPNN in terms of accuracy, among other approaches."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Methodology",
|
| 15 |
+
"text": "###figure_1### In this section, we outline our deep learning model for diabetes diagnosis and its structure. We chose a Back-Propagation Neural Network (BPNN) due to its superior representation and feature extraction capabilities compared to other statistical machine learning methods.\nThe BP algorithm works by iteratively updating the network\u2019s weights and biases based on the error between the predicted outputs and the actual targets from a set of training examples. The algorithm starts with a forward pass (feedforward) to compute the activations of each neuron in the network and then calculates the output error. It then propagates this error backward through the layers (backpropagate error), computing the errors for each neuron in each layer. Finally, the gradients of the cost function with respect to the weights and biases are computed using the errors, which are used to update the weights and biases in the network, thereby improving its ability to diagnose diabetes accurately.\nThe BPNN is shown in Figure 2 ###reference_###, which is built up by full connections of an input layer, three hidden layers, and an output layer. The input layer possesses the same number of neurons as the features. The number of neurons in the hidden layers is 64, 32, and 16, respectively. Eventually, we have two neurons in the output layer referring to the two output classes.\nIn Figure 2 ###reference_###, each fully connected arrow stands for a feed-forward process. The sigmoid activation function has been implemented for each layer, and the output of each layer has been normalized.\nWe implement batch normalization [15 ###reference_b15###] to improve the training speed and stability, as well as to mitigate the issue of internal covariate shift, thus enhancing the overall performance of our BPNN for diabetes diagnosis.\nThe algorithm computes the mean and variance of inputs within a batch of training samples and then normalizes the inputs by subtracting the mean and dividing by the square root of the variance. This normalization step helps stabilize and speed up the training process. During inference, batch normalization uses running averages of the batch mean and variance to normalize the inputs, along with scaling and shifting parameters to obtain the final outputs of the layers, ensuring the network performs well on new, unseen data.\nFor the back propagation, the figure demonstrates a standard or typical MLP, not a Single-Layer perceptron. For standard MLP, it uses BP to update weight & bias. We used the cross-entropy loss as the loss function and the adaptive moment estimation (ADAM) to search for the minima of the loss function.\nThe hyperparameter mentioned above is determined by grid search, which is explained in detail in Section 3.2 ###reference_###. In this process, a predefined set of hyperparameter values is defined for each hyperparameter (e.g., hidden layers, activation function, optimizer, mini-batch size), and the model\u2019s performance is evaluated for all possible combinations of these values using cross-validation. The combination that results in the best performance metric (e.g., accuracy, loss) on the validation set is then selected as the optimal set of hyperparameters for the model. Utilizing grid search, we employed an exhaustive search technique to identify the optimal hyperparameter configuration for the proposed Back Propagation Neural Network (BPNN) model. 
This systematic approach enabled us to maximize the performance of the BPNN by selecting the combination of hyperparameters that yielded the highest performance metrics.\n###figure_2###"
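A minimal PyTorch sketch of a network matching this description (three hidden layers of 64, 32 and 16 neurons, batch normalization, sigmoid activations, two output neurons, cross-entropy loss and Adam) is given below. The learning rate, mini-batch size and the use of unnormalized logits at the output are assumptions of the sketch, not details taken from the paper.

```python
import torch
import torch.nn as nn

class BPNN(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        # Three hidden layers (64, 32, 16) with batch norm and sigmoid, 2 output classes
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.BatchNorm1d(64), nn.Sigmoid(),
            nn.Linear(64, 32), nn.BatchNorm1d(32), nn.Sigmoid(),
            nn.Linear(32, 16), nn.BatchNorm1d(16), nn.Sigmoid(),
            nn.Linear(16, 2),  # logits for the two outcome classes
        )

    def forward(self, x):
        return self.net(x)

model = BPNN(n_features=8)                 # 8 predictors in the Pima dataset
criterion = nn.CrossEntropyLoss()          # cross-entropy loss, as described above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is an assumption

x = torch.randn(16, 8)                     # dummy mini-batch of standardized features
y = torch.randint(0, 2, (16,))             # dummy binary outcomes
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()                            # back propagation of the error
optimizer.step()
```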
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Experiment",
|
| 21 |
+
"text": "In this section, we evaluate the effectiveness of BPNN model on Pima Indian diabetes dataset and compare it with some statistical learning methods, other deep learning methods, and some existing methods done by related works.\nThe stages of the experiment could be generally described as (1) Data Preprocessing, (2) Hyperparameter tuning of BPNN, and (3) Validation, which is shown in Figure 1 ###reference_###. The proposed pipeline\u2019s workflow involves three main steps for improving the performance of the model in handling unbalanced data. Firstly, an undersampling technique is applied to balance the class distribution in the dataset. Secondly, standardization is performed to scale the data, ensuring consistency in feature magnitudes. Lastly, the processed data is used to train a Back Propagation Neural Network (BPNN) model, adopting a five-fold cross-validation approach to assess its performance and ensure robustness in the evaluation process."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "Data Preprocessing",
|
| 27 |
+
"text": ""
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.1.1",
|
| 31 |
+
"parent_section_id": "3.1",
|
| 32 |
+
"section_name": "3.1.1 Overview of Dataset",
|
| 33 |
+
"text": "Pima Indian diabetes dataset is provided by National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) and the Applied Physics Laboratory of the Johns Hopkins University [3 ###reference_b3###]. The dataset provided 768 females at least 21 years old of Pima Indian heritage who responded to the survey. The dataset consists of several medical predictors (i.e. independent variables) and a target (dependent) variable, Outcome. Independent variables include the number of pregnancies the sample has had, their age, BMI, blood pressure (BP), insulin level, and so on. The correlation matrix of Pima dataset is shown in Figure 3 ###reference_###. Based on specific diagnostic metrics present in the dataset, the goal of the dataset is to diagnostically forecast whether a patient has diabetes or not.\n###figure_3###"
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1.2",
|
| 37 |
+
"parent_section_id": "3.1",
|
| 38 |
+
"section_name": "3.1.2 Data Balancing",
|
| 39 |
+
"text": "When one class dominates the other classes in a dataset relative to the target class variable, the dataset is said to be imbalanced [16 ###reference_b16###]. However, classification algorithms are designed to assume that the dataset is balanced [16 ###reference_b16###]. When a classifier is trained using an imbalanced dataset, it will probably be biased towards the majority class, which means that the performance of the classifier will be better at predicting the majority class than the minority class [16 ###reference_b16###]. Eventually, it will result in low sensitivity. Thus, an imbalanced dataset will introduce bias during training. Therefore, balancing imbalanced datasets is one of the most essential methods in data preprocessing since it will help reduce bias in the prediction, and thereby enhance the performance of the classifier.\nThe initial Pima dataset exhibited an imbalanced distribution, comprising 268 positive instances (with diabetes) and 500 negative instances (without diabetes). The Pima Indian diabetes dataset is a highly imbalanced data since the size of the negative class is significantly larger than the size of the positive class. To address this class imbalance, we applied a data undersampling technique. Undersampling is a technique that balances the dataset by randomly reducing the size of the majority class until reaching the size of the minority class. Despite it might discard some samples from the original dataset, it will not introduce any bias to training and is considered to be one of the most widely used data balancing methods. Consequently, the dataset was rebalanced, resulting in an equal number of instances, namely 268 instances in each class."
},
{
"section_id": "3.1.3",
"parent_section_id": "3.1",
"section_name": "3.1.3 Data Scaling",
"text": "It is well known that most machine learning methods evaluate the data distance or similarity (e.g., Euclidean distance) to make inferences and predictions. However, few features are measured on the same scale. Specifically, the majority of the features are either different in magnitudes or different in units. Hence, scaling the data will bring every feature the same contribution to the classification, which will enhance the performance of classification algorithms [2 ###reference_b2###]. Scaling will also reduce the time spent training. If the values of the features are closer to each other, it will accelerate the process for the classifier to understand the data and speed up the process of convergence of gradient descent [30 ###reference_b30###, 15 ###reference_b15###].\nThere are two major approaches for scaling the data: normalization and standardization. Hence, we choose standardization as our scaling method since it does not harm the position of outliers, wherein the normalization captures all the data in a certain range. The distribution of the features before and after scaling is shown in Figure 4 ###reference_###.\nFor standardization, we have\nwhere refers to the standard deviation of .\n###figure_4###"
},
{
"section_id": "3.1.4",
"parent_section_id": "3.1",
"section_name": "3.1.4 Data Visualization",
"text": "The visualization process involved two dimensionality reduction techniques: Principal Component Analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (t-SNE). PCA captured the most important information in the data, presenting it in a way that highlights similarities and differences. t-SNE, on the other hand, focused on visualizing high-dimensional data by creating a probability distribution that emphasized similarities and minimized the divergence between high and low-dimensional representations. While t-SNE is better suited for non-linear data, it comes with a higher computational complexity. Both PCA and t-SNE were employed to reduce the data into two dimensions for visualization purposes, which is shown in Figure 5 ###reference_###.\n###figure_5###"
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Hyperparameter Tuning",
"text": "Grid search, also known as parameter sweep, is a hyperparameter optimization method performed by a thorough search across a manually chosen subset of a learning algorithm\u2019s hyperparameter space. An evaluation on a hold-out validation set or cross-validation on the training set are two common ways to measure performance metrics for grid search algorithms. Prior to conducting a grid search, manually established boundaries and discretization may be required since the parameter space of a classifier may comprise real-valued or unbounded value spaces for some parameters.\nTo implement grid search for hyperparameter tuning, we need to determine a subset of hyperparameter space as the grid search dictionary. Eventually, we derived the optimal hyperparameters for BPNN, which is demonstrated in Figure 1 ###reference_###."
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "Validation",
"text": ""
},
{
"section_id": "3.3.1",
"parent_section_id": "3.3",
"section_name": "3.3.1 Cross Validation",
"text": "K-fold cross-validation is one of the most widely used approaches for parameter tuning during training and performance evaluation of a classifier.\nThe dataset has been first split into two subsets, 80% for training and 20% for testing purposes. During the training process, the training data is randomly split into 5 folds,\nFor each iteration, we use four of them for training and one for validating, and these training data were used to train and tune the hyperparameters of the BP neural network. Once the iteration is completed, the model has already been fine-tuned and will be validated using the testing data."
},
{
"section_id": "3.3.2",
"parent_section_id": "3.3",
"section_name": "3.3.2 Evaluation Metrics",
"text": "Several evaluation metrics were used to test the performance of the developed model. One of the most well-known indicators is accuracy, which is defined as the percentage of all identifications that are actually correct. Moreover, to ensure the model is not biased towards a single class, we also use sensitivity and specificity, which are the true positive rate and true negative rate, respectively."
},
{
"section_id": "3.4",
"parent_section_id": "3",
"section_name": "Results",
"text": "We investigate the performance of the proposed BPNN on testing data and achieved 89.81% for accuracy, 89.29% for sensitivity, and 90.38% for specificity.\nWe also compared our BPNN model with several machine learning methods. Moreover, we also compared with some of the best performing related works, including the Classwise k Nearest Neighbor (CkNN) from Christobel\u2019s work [6 ###reference_b6###], the improved genetic algorithm and multi-layer perceptron (GA-MLP) from Ahmad\u2019s work [1 ###reference_b1###], General\nregression neural network (GRNN) from Kayaer\u2019s work [10 ###reference_b10###], Multi-Layer Feed Forward Neural Networks (MLFNN) from Kumar\u2019s work [21 ###reference_b21###], and Generalized Discriminant Analysis combined Least Square Support Vector Machine (GDA-LS-SVM) from Polat\u2019s work [18 ###reference_b18###].\nThe results on the Pima Indian diabetes dataset and other datasets are shown in Table 2 ###reference_###. Our proposed BPNN outperformed CkNN by 11.65%, GDA-LS-SVM by 10.65%, GA-MLP by 9.41%, GRNN by 9.6%, and MLFNN by 8.03% in the Pima diabetes dataset. The underperformance of the least performing models compared with our model can be attributed to two main factors. Firstly, these models did not utilize data balancing and scaling techniques, resulting in an unbalanced training data that tends to favor the major class, thereby significantly impacting their performance, as seen in GA-MLP, GRNN, and MLFNN. Secondly, traditional statistical machine learning methods, such as CkNN and GDA-LS-SVM, lack the capability to extract deep abstract features, which hinders their performance when compared to deep neural networks. Consequently, the deep neural network serves as an effective encoder for feature extraction, contributing to the classifier\u2019s superior performance. It is obvious that the proposed method has significantly improved the accuracy of diabetes diagnosis compared with other machine learning methods."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Conclusion",
"text": "In this study, we introduced an innovative diabetes diagnosis model that leverages the Back Propagation Neural Network (BPNN) in synergy with batch normalization. Our model presents a noteworthy advancement in enhancing the accuracy of diabetes diagnosis across authentic datasets. The substantial performance improvement demonstrated not only surpasses related models but also potentially positions it as a benchmark, signifying its pivotal role in shaping the landscape of diabetes diagnosis. Despite limited dataset size and features, our method showed promising results across multiple datasets for diabetes diagnosis. Moving forward, our future work will involve refining and validating our approach with more comprehensive datasets to enhance its robustness and generalizability. Additionally, we aim to improve our diagnostic approach through data processing refinement and feature engineering."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Chosen subset of hyperparameter space, and optimal hyperparameters for BPNN, which are labeled in red.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S3.T1.1.1.1.1\">Hyperparameter</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.1.1.1.2\">Value</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.1.2.1.1\">Hidden Layer</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.2.1.2\">[16,8,4],[32,16,8],<span class=\"ltx_text\" id=\"S3.T1.1.2.1.2.1\" style=\"color:#FF0000;\">[64,32,16]</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S3.T1.1.3.2.1\">Activation</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.3.2.2\">\n<span class=\"ltx_text\" id=\"S3.T1.1.3.2.2.1\" style=\"color:#FF0000;\">Sigmoid</span>, ReLU</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S3.T1.1.4.3.1\">Optimizer</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.4.3.2\">SGD, <span class=\"ltx_text\" id=\"S3.T1.1.4.3.2.1\" style=\"color:#FF0000;\">Adam</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b\" id=\"S3.T1.1.5.4.1\">Mini Batch</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.1.5.4.2\">8, <span class=\"ltx_text\" id=\"S3.T1.1.5.4.2.1\" style=\"color:#FF0000;\">16</span>, 32</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "Table 1: Chosen subset of hyperparameter space, and optimal hyperparameters for BPNN, which are labeled in red."
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Comparative results on different datasets with various models. The cells with \u2018-\u2019 indicate that certain comparative studies did not assess their models on specific datasets.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T2.1\" style=\"width:433.6pt;height:253pt;vertical-align:-0.7pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-92.6pt,53.9pt) scale(0.700736053814534,0.700736053814534) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S3.T2.1.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_t\" colspan=\"3\" id=\"S3.T2.1.1.1.1.2\">NIDDK Pima Indian Diabetes Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_t\" colspan=\"3\" id=\"S3.T2.1.1.1.1.3\">CDC BRFSS2015 Database</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_t\" colspan=\"3\" id=\"S3.T2.1.1.1.1.4\">BIT Mesra Diabetes Dataset</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.2.2.1\">Method</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T2.1.1.2.2.2\">Test. Acc.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T2.1.1.2.2.3\">Sensitivity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.2.2.4\">Specificity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T2.1.1.2.2.5\">Test. Acc.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T2.1.1.2.2.6\">Sensitivity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.2.2.7\">Specificity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T2.1.1.2.2.8\">Test. 
Acc.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T2.1.1.2.2.9\">Sensitivity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T2.1.1.2.2.10\">Specificity</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3.1.1\">LDA</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.3.1.2\">0.7222</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.3.1.3\">0.6721</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3.1.4\">0.7872</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.3.1.5\">0.7416</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.3.1.6\">0.7767</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3.1.7\">0.7064</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.3.1.8\">0.8868</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.3.1.9\">0.9048</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.3.1.10\">0.875</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.4.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.4.2.1\">KNN</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.4.2.2\">0.8148</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.4.2.3\">0.7705</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.4.2.4\">0.8723</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.4.2.5\">0.7376</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.4.2.6\">0.7952</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.4.2.7\">0.6792</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.4.2.8\">0.9151</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.4.2.9\">0.9524</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.4.2.10\">0.8906</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.5.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.5.3.1\">Logistic Regression</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.5.3.2\">0.6852</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.5.3.3\">0.6230</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.5.3.4\">0.7660</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.5.3.5\">0.7418</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.5.3.6\">0.7685</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.5.3.7\">0.7148</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.5.3.8\">0.8396</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.5.3.9\">0.9286</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.5.3.10\">0.7813</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.6.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.6.4.1\">SVM</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.6.4.2\">0.7130</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.6.4.3\">0.6393</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.6.4.4\">0.8085</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.6.4.5\">0.7411</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.6.4.6\">0.7906</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S3.T2.1.1.6.4.7\">0.6908</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.6.4.8\">0.8491</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.6.4.9\">0.8571</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.6.4.10\">0.8438</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.7.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.7.5.1\">Decision Trees</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.7.5.2\">0.7037</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.7.5.3\">0.6885</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.7.5.4\">0.7234</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.7.5.5\">0.7364</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.7.5.6\">0.7622</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.7.5.7\">0.7102</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.7.5.8\">0.9057</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.7.5.9\">0.9524</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.7.5.10\">0.875</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.8.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.8.6.1\">Random Forest</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.8.6.2\">0.7222</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.8.6.3\">0.7213</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.8.6.4\">0.7234</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.8.6.5\">0.7304</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.8.6.6\">0.7673</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.8.6.7\">0.6930</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.8.6.8\">0.8774</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.8.6.9\">0.9286</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.8.6.10\">0.8438</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.9.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.9.7.1\">Bagging</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.9.7.2\">0.6944</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.9.7.3\">0.6230</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.9.7.4\">0.7872</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.9.7.5\">0.7477</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.9.7.6\">0.7983</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.9.7.7\">0.6964</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.9.7.8\">0.8868</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.9.7.9\">0.9524</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.9.7.10\">0.8438</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.10.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.10.8.1\">XGBoost</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.10.8.2\">0.7870</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.10.8.3\">0.7377</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.10.8.4\">0.8511</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.10.8.5\">0.7505</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.10.8.6\"><span class=\"ltx_text\" id=\"S3.T2.1.1.10.8.6.1\" style=\"color:#FF0000;\">0.7987</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.10.8.7\">0.7017</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.10.8.8\">0.9245</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.10.8.9\">0.9524</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.10.8.10\">0.9062</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.11.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.11.9.1\">K-Means Clustering</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.11.9.2\">0.6481</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.11.9.3\">0.4590</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.11.9.4\">0.8936</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.11.9.5\">0.6653</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.11.9.6\">0.5069</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.11.9.7\"><span class=\"ltx_text\" id=\"S3.T2.1.1.11.9.7.1\" style=\"color:#FF0000;\">0.8259</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.11.9.8\">0.7264</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.11.9.9\">0.4762</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.11.9.10\">0.8906</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.12.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.12.10.1\">SOM</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.12.10.2\">0.7130</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.12.10.3\">0.6721</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.12.10.4\">0.7660</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.12.10.5\">0.6611</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.12.10.6\">0.5118</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.12.10.7\">0.8125</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.12.10.8\">0.6698</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.12.10.9\">0.5714</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.12.10.10\">0.7344</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.13.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.13.11.1\">ResNet-14</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.13.11.2\">0.7963</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.13.11.3\">0.7049</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.13.11.4\"><span class=\"ltx_text\" id=\"S3.T2.1.1.13.11.4.1\" style=\"color:#FF0000;\">0.9149</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.13.11.5\">0.7492</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.13.11.6\">0.7790</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.13.11.7\">0.7187</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.13.11.8\">0.9245</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.13.11.9\">0.9524</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.13.11.10\">0.9063</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.14.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.14.12.1\">ResNet-50</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.14.12.2\">0.7870</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.14.12.3\">0.7706</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.14.12.4\">0.8085</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.14.12.5\">0.7442</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S3.T2.1.1.14.12.6\">0.7722</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.14.12.7\">0.7158</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.14.12.8\">0.9151</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.14.12.9\">0.9286</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.14.12.10\">0.9062</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.15.13\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.15.13.1\">CkNN</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.15.13.2\">0.7816</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.15.13.3\">0.6184</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.15.13.4\">0.8738</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.15.13.5\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.15.13.6\">\u2014</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.15.13.7\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.15.13.8\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.15.13.9\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.15.13.10\">\u2014</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.16.14\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.16.14.1\">GDA-LS-SVM 3</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.16.14.2\">0.7916</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.16.14.3\">0.8333</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.16.14.4\">0.8205</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.16.14.5\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.16.14.6\">\u2014</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.16.14.7\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.16.14.8\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.16.14.9\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.16.14.10\">\u2014</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.17.15\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.17.15.1\">GA-MLP</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.17.15.2\">0.8040</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.17.15.3\">\u2014</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.17.15.4\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.17.15.5\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.17.15.6\">\u2014</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.17.15.7\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.17.15.8\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.17.15.9\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.17.15.10\">\u2014</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.18.16\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.18.16.1\">GRNN</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.18.16.2\">0.8021</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.18.16.3\">\u2014</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.18.16.4\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.18.16.5\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.18.16.6\">\u2014</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.18.16.7\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.18.16.8\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.18.16.9\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.18.16.10\">\u2014</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.19.17\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.1.1.19.17.1\">MLFNN</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.19.17.2\">0.8173</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.19.17.3\">\u2014</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.19.17.4\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.19.17.5\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.19.17.6\">\u2014</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.1.19.17.7\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.19.17.8\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.19.17.9\">\u2014</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.1.19.17.10\">\u2014</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.20.18\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S3.T2.1.1.20.18.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.20.18.1.1\">DiabetesNet (Ours)</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.1.1.20.18.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.20.18.2.1\" style=\"color:#FF0000;\">0.8981</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.1.1.20.18.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.20.18.3.1\" style=\"color:#FF0000;\">0.8929</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T2.1.1.20.18.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.20.18.4.1\">0.9038</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.1.1.20.18.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.20.18.5.1\" style=\"color:#FF0000;\">0.7549</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.1.1.20.18.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.20.18.6.1\">0.7977</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T2.1.1.20.18.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.20.18.7.1\">0.7112</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.1.1.20.18.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.20.18.8.1\" style=\"color:#FF0000;\">0.9528</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.1.1.20.18.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.20.18.9.1\" style=\"color:#FF0000;\">1.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.1.1.20.18.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.20.18.10.1\" style=\"color:#FF0000;\">0.9219</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
"capture": "Table 2: Comparative results on different datasets with various models. The cells with \u2018-\u2019 indicate that certain comparative studies did not assess their models on specific datasets."
}
},
"image_paths": {
"1": {
"figure_path": "2403.07483v2_figure_1.png",
"caption": "Figure 1: Workflow of Proposed Method: The pipeline encompasses crucial components, including data undersampling to address class imbalance in the dataset. The Workflow of our proposed method illustrates the data scaling procedure for effective feature normalization. The backbone of the pipeline consists of a Back Propagation Neural Network (BPNN) architecture, enhanced with batch normalization, to facilitate automatic diabetes diagnosis. This comprehensive pipeline demonstrates potential for accurate and automated diabetes classification.",
"url": "http://arxiv.org/html/2403.07483v2/x1.png"
},
"2": {
"figure_path": "2403.07483v2_figure_2.png",
"caption": "Figure 2: Structure of BPNN",
"url": "http://arxiv.org/html/2403.07483v2/x2.png"
},
"3": {
"figure_path": "2403.07483v2_figure_3.png",
"caption": "Figure 3: Correlation matrix of Pima dataset: The figure displays the correlation matrix of the Pima dataset, providing a visual representation of the interrelationships between the variables within the dataset.",
"url": "http://arxiv.org/html/2403.07483v2/extracted/5869729/pima_heatmap.png"
},
"4": {
"figure_path": "2403.07483v2_figure_4.png",
"caption": "Figure 4: The figure displays the feature distributions for diabetes diagnosis in the dataset before (top sub-figure) and after (bottom sub-figure) scaling using standardization. Standardization has successfully transformed the features to a comparable magnitude, resulting in a more uniform distribution, facilitating the training process and enhancing the performance of the Back Propagated diabetes diagnosis model.",
"url": "http://arxiv.org/html/2403.07483v2/extracted/5869729/pima_scale.png"
},
"5": {
"figure_path": "2403.07483v2_figure_5.png",
"caption": "Figure 5: The plot compares the distribution of positive and negative samples using two methods, PCA (linear dimensionality reduction) and t-SNE (nonlinear dimensionality reduction), providing a comprehensive visualization of their distribution in the dataset.",
"url": "http://arxiv.org/html/2403.07483v2/extracted/5869729/pima_2d.png"
}
},
"validation": true,
"references": [],
"url": "http://arxiv.org/html/2403.07483v2"
}
20240921/2403.08214v3.json
ADDED
The diff for this file is too large to render.
See raw diff
20240921/2403.10081v3.json
ADDED
The diff for this file is too large to render.
See raw diff
20240921/2403.11693v3.json
ADDED
@@ -0,0 +1,363 @@
{
"title": "Beamforming Design for Semantic-Bit Coexisting Communication System",
"abstract": "Semantic communication (SemCom) is emerging as a key technology for future sixth-generation (6G) systems.\nUnlike traditional bit-level communication (BitCom), SemCom directly optimizes performance at the semantic level,\nleading to superior communication efficiency. Nevertheless, the task-oriented nature of SemCom\nrenders it challenging to completely replace BitCom.\nConsequently,\nit is desired to consider a semantic-bit coexisting communication system, where\na base station (BS) serves SemCom users (sem-users) and BitCom users (bit-users) simultaneously.\nSuch a system faces severe and heterogeneous inter-user interference.\nIn this context, this paper provides a new semantic-bit coexisting communication framework and proposes a spatial beamforming scheme to accommodate both types of users.\nSpecifically, we consider maximizing the semantic rate for semantic users while ensuring the quality-of-service (QoS) requirements for bit-users.\nDue to the intractability of obtaining the exact closed-form expression of the semantic rate, a data driven method is first applied to attain an approximated expression via data fitting. With the resulting complex transcendental function,\nmajorization minimization (MM)\nis adopted to convert the original formulated problem into a multiple-ratio problem,\nwhich allows fractional programming (FP) to be used to further transform the problem into an inhomogeneous quadratically constrained quadratic programs (QCQP) problem.\nSolving the problem leads to a semi-closed form solution with undetermined Lagrangian factors that can be updated by a fixed point algorithm.\nThis method is referred to as the MM-FP algorithm. Additionally, inspired by the semi-closed form solution, we also propose a low-complexity version of the MM-FP algorithm, called the low-complexity MM-FP (LP-MM-FP), which alleviates the need for iterative optimization of beamforming vectors. Extensive simulation results demonstrate that the proposed MM-FP algorithm outperforms conventional beamforming algorithms such as zero-forcing (ZF), maximum ratio transmission (MRT), and weighted minimum mean-square error (WMMSE). Moreover, the proposed LP-MMFP algorithm achieves comparable performance with the WMMSE algorithm but with lower computational complexity.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "According to Shannon and Weaver [1 ###reference_b1###], communication could be classified into three levels: technical level, which concerns how accurately the symbols of communication are transmitted;\nsemantic level, which concerns how precisely the transmitted symbols convey the desired meaning;\nand efficiency level, which concerns how effectively the received meaning affect behavior in the desired way.\nIn the past forty years, researchers mainly focused on the first level design, driving the evolution of mobile communication systems from the first generation (1G) to the fifth generation (5G).\nThe transmission rate has been significantly improved and the system capacity is gradually approaching the Shannon limit. However, the rapid growth of communication demand in modern society shows no signs of stopping.\nSpecifically,\nthe upcoming sixth generation (6G) is expected to achieve transmission rates that are ten times faster than those of 5G [2 ###reference_b2###], which will enable the support of numerous new applications\nincluding virtual and augmented reality (VR/AR), smart factories, intelligent transportation systems, etc [3 ###reference_b3###].\nThis thus prompts an active research area that rethinks the communication systems at the semantic even effectiveness level."
},
{
"section_id": "1.1",
"parent_section_id": "1",
"section_name": "Semantic Communication",
"text": "To build communication systems at the semantic level, semantic communication (SemCom) that mines the semantic information from the source, has emerged as one of the most popular candidate technologies in 6G. SemCom shifts the research focus from compression and transmission of digital bit information to the representation and delivery of semantics, driven by knowledge and logic [4 ###reference_b4###].\nThe authors in [5 ###reference_b5###] first explored the definition of semantic information, which is based on the logical probability over language content. Building on this definition, the authors in [6 ###reference_b6###] further proposed a general transmission paradigm that utilizes the shared knowledge base for SemCom. Then, a semantic communication framework was proposed in [7 ###reference_b7###] to minimize the end-to-end average semantic error. Despite these advancements, SemCom is still in its early stage due to the challenges associated with extracting semantics across common data modalities.\nRecently, artificial intelligence (AI) has shown its significant potential in semantic representation and reconstruction. For semantics extraction, the authors in [8 ###reference_b8###] considered using neural networks to extract the knowledge graph behind images, thereby enabling the effective delivery of semantic information by accurately transmitting the knowledge graph. Given the computation cost of semantic extraction, the authors in [9 ###reference_b9###] further explored joint computation and communication optimization for knowledge graph transmission. For semantics reconstruction, the concept of deep joint source and channel coding (DeepJSCC) has emerged.\nCompared with the traditional bit-level digital communication (BitCom) framework that adopts separate source and channel coding (SSCC) for minimizing bit/symbol error rate, DeepJSCC-based SemCom embraces joint source and channel coding (JSCC) through neural network, which enables the extraction of semantic information and demonstrates a better transmission efficiency compared with BitCom [10 ###reference_b10###].\nThe authors in [11 ###reference_b11###] proposed to use neural network to achieve JSCC for image recovery, and optimized the system performance through end-to-end learning under the criteria of mean square error (MSE).\nThen, the authors in [10 ###reference_b10###] incorporated transformer and proposed DeepSC, which is shown to outperform\nBitCom, especially in the low signal-to-noise (SNR) regime.\nBased on these pioneering works [11 ###reference_b11###, 10 ###reference_b10###], SemCom has then been extensively studied under different data modalities, including image [12 ###reference_b12###, 13 ###reference_b13###], text [14 ###reference_b14###], speech [15 ###reference_b15###], video [16 ###reference_b16###], and multimodal data [17 ###reference_b17###].\nDespite the potential performance gain of SemCom,\ncritical concerns about its practical deployment remain.\nFor example,\nearly JSCC based SemCom systems employ analog symbol transmission [11 ###reference_b11###, 10 ###reference_b10###, 12 ###reference_b12###],\nwhile it has been verified that digital transmission is more reliable and secure,\nas well as cost-effective in hardware implementation.\nThis prompts the development of digital SemCom by designing\ncodebooks [18 ###reference_b18###] and quantization methods [19 ###reference_b19###, 20 ###reference_b20###] for semantic information.\nBesides, the current SemCom heavily relies on neural networks, which 
are prone to overfitting the training data collected under certain limited scenarios and thus lack of generalization capability to deal with\nthe challenges brought by the dynamic wireless environment.\nPrompted by this, authors in [21 ###reference_b21###] proposed an attention-based JSCC scheme that uses channel-wise soft attention to scale features according to SNR conditions, which enables it applicable to scenarios with a broad range of SNRs through a single model.\nThen, given the multiple antenna cases, a channel-adaptive JSCC scheme that exploits the channel state information (CSI) and SNR through attention mechanism was further proposed in [22 ###reference_b22###]."
},
{
"section_id": "1.2",
"parent_section_id": "1",
"section_name": "Motivations",
"text": "Although SemCom has shown great potential for 6G, there is a critical issue that requires further investigation: Can SemCom completely replace BitCom?\nWe believe the answer is no.\nThis is because the task-oriented nature of SemCom implies that it needs to be tailored for each specific task, which renders it not suitable for generic transmission tasks.\nAs a result, we envision that future 6G network will see the co-existence of SemCom and BitCom,\nyielding the semantic-bit coexisting system that\nsupports both SemCom users (sem-users) and BitCom users (bit-users).\nIn the coexisting system, due to the diverse performance objectives, existing transmission schemes for BitCom can no longer provide satisfactory services for the sem-users, and thus need to be redesigned. In response to this, we investigate the beamforming design for the coexisting multi-user multiple-input single-output (MU-MISO) system, and try to shed lights on how to adapt the current transmission algorithms in BitCom to the coexisting system."
},
{
"section_id": "1.3",
"parent_section_id": "1",
"section_name": "Related works",
"text": "The study of multiuser SemCom has received a lot of attention in recent years,\nwhich mainly lies in two directions: resource allocation and interference management. In terms of resource allocation, a semantic-aware channel assignment mechanism was proposed in [23 ###reference_b23###], and an optimal semantic-oriented resource block allocation method was put forward in [24 ###reference_b24###] subsequently. The main idea of these two works is adjusting communication resources for boosting the transmission of semantic information.\nOn the other hand, multiuser usually accompanies with interference, which can cause semantic noise that significantly degrades the performance [25 ###reference_b25###]. To mitigate the interference, several methods have been proposed. For instance, the authors in [26 ###reference_b26###] proposed to jointly optimize the codebook and the decoder, as such the user interference could be minimized. The authors in [27 ###reference_b27###] further proposed to dynamically fuse the semantic features to a joint latent representation and adjust the weights of different user semantic features to combat fading channels. In addition to the interference from other sem-users,\nthe interference from bit-users needs to be appropriately mitigated as well. Given this, the coexistence of sem-users and bit-users was considered in the non-orthogonal multiple access (NOMA) system [28 ###reference_b28###, 29 ###reference_b29###], where bit-users and sem-users are viewed as primary users and secondary users, respectively. The interference issue was addressed through successive interference cancellation (SIC).\nHowever, in the case of MU-MISO that enables multi-users through spatial multiplexing,\nthe coordination of the two types of users with diverse transmission objectives remains largely unexplored.\nBeamforming is a key technique in MU-MISO systems and has been commonly-used for interference mitigation [30 ###reference_b30###, 31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###, 36 ###reference_b36###].\nSeveral linear design methods have been proposed to tackle the beamforming problem in MU-MISO systems. Zero forcing (ZF) and maximum ratio transmission (MRT) algorithm are two simple but effective beamforming algorithms. The former minimizes the user interference, and the latter maximizes the signal gain at the destination user.\nBesides, a well-known iterative algorithm is the weighted minimum mean-square error (WMMSE) algorithm [30 ###reference_b30###],\nwhich achieves high performance by\nfirst transforming the original problem into an MMSE problem and then updating the variables in an alternative manner. Given high complexity of the WMMSE algorithm and limited performance of the ZF and the MRT algorithm, researchers resort to deep learning for developing beamforming scheme with both low complexity and high performance.\nWith the optimal solution structure revealed in [31 ###reference_b31###], the data driven method that learns the undetermined parameters in the solution structure was proposed in [32 ###reference_b32###], and further extended in [33 ###reference_b33###]. Besides, the authors in [37 ###reference_b37###] proposed to use deep unfolding of the WMMSE algorithm for MU-MISO downlink precoding, which constructs the iteration process in neural networks. 
Variants of the deep unfolding-based methods have been investigated in [34 ###reference_b34###].\nThe aforementioned schemes aim to maximize the data rate for BitCom. However, recent research has revealed that the semantic rate in SemCom has a distinct mapping from SNR to performance [23 ###reference_b23###, 28 ###reference_b28###]. As a result, existing methods may not be suitable for the semantic-bit coexisting system, and a new beamforming scheme that takes into account the different transmission objectives is urgently needed."
},
{
"section_id": "1.4",
"parent_section_id": "1",
"section_name": "Contribution and Organization",
"text": "In this paper, we investigate the transmission design for the semantic-bit coexisting paradigm in the multiple-antenna communication system.\nSpecifically, we consider sem-users with the task of image transmission\nand propose an adaptive JSCC autoencoder for semantic information extraction and recovery.\nRecognizing the primary challenge lies in dealing with an intractable semantic rate function, we first perform data regression to model the semantic rate, yielding a complex transcendental function. Then a beamforming problem that optimizes the performance of sem-users under the quality-of-service (QoS) constraints of bit-users is formulated and solved.\nThe contributions of this paper are summarized as follows:\n###figure_1### Targeting the task of image transmission, we propose an effective JSCC scheme that features a dynamic depth of downsampling, which is realized through the \u201cearly exit\u201d mechanism [17 ###reference_b17###] and the proposed module-by-module training scheme. On this basis, we further conduct semantic rate approximation on the ImageNet dataset and build the mapping from the depth of downsampling and SNR to semantic rate through data regression.\nWe propose a beamforming design scheme for the semantic-bit coexisting system. Specifically, we tackle the primary challenge posed by the transcendental semantic rate function.\nBy employing majorization-minimization (MM) and introducing a novel surrogate function, the original objective is transformed into a multiple-ratio form, which is further converted to an inhomogeneous quadratically constrained quadratic programs (QCQP) problem by fractional programming. The semi-closed form solution for the resulting QCQP problem is derived, and the original problem is solved in an alternative manner. Additionally, the alternative algorithm has inspired a low-complexity beamforming method to address the complexity concern.\nBoth theoretical analysis and numerical simulations are presented to validate the effectiveness of the proposed beamforming scheme in semantic-bit coexisting communication systems.\nThe rest of this paper is organized as follows. Section II ###reference_### introduces the semantic-bit coexisting system model.\nSection III ###reference_### presents the proposed JSCC design, the approximation of semantic rate, and the problem formulation.\nThe optimization problem is solved in Section IV ###reference_###.\nThen, extensive simulation results are given in Section V ###reference_###, followed by the concluding remarks in Section VI ###reference_###."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "II System Model",
"text": "In this section, we first present the semantic-bit coexisting system and the transmission protocol,\nbased on which the performance metrics of sem-users and bit-users are analyzed, respectively."
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "II-A Semantic-bit Coexisting Communication Framework",
"text": "We consider a single-cell downlink MU-MISO system shown in Fig. 1 ###reference_###. The base station (BS) is equipped with transmit antennas, while the users have a single antenna each. The users are divided into two groups, namely bit-users with BitCom and sem-users with SemCom. We denote the bit-users set as and the sem-users set as , with and being the numbers of bit-users and sem-users, respectively.\nThe transmit signal vector at the BS, denoted by , is given by\nwhere and denote the beamforming vector of the -th bit-user and the -th sem-user, respectively.\nFurthermore, we assume that and are zero mean and , and the symbols desired for different users are independent from each other.\nThen, the received signal at bit-user can be expressed as\nwhere denotes the MISO channel from the BS to user and represents the additive noise which is modeled as a circularly symmetric complex Gaussian random variable following the distribution , with being the average noise power.\nSimilarly, let denote the channel from the BS to the sem-user , the received signal at sem-user is given by\nwhere is the additive white Gaussian noise with distribution . Notice that denotes the symbol stream that contains symbols in the latent representation, which is the output of the JSCC encoder. The symbols within the latent representation are transmitted sequentially.\n###figure_2###"
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "II-B Bit-level Communication",
"text": "We adopt a transmission frame structure consisting of\n symbol intervals, as shown in Fig. 2. We assume slow fading channel, which means that the channel does not change within a frame and independently fades across different frames. In this vein, the first symbols are utilized for channel estimation and the remaining symbols for data transmission.\nWith the estimated CSI, the BS is able to conduct beamforming.\n It is worth noting that for sem-users, the goal of transmission is to convey the latent representation from the transmitter to the receiver.\nBased on Fig. 10 in [38 ###reference_b38###] and our own observation, the performance for sem-users does not improve significantly beyond a certain threshold of the number of transmitted symbols. Drawing inspiration from semi-NOMA principle outlined in [39 ###reference_b39###], we assume that sem-users complete transmission of the latent representation within symbol intervals (), while bit-users utilize all symbol intervals for data transmission.\n In this context, the total data transmission component of length symbol intervals is further divided into two parts, as shown in Fig. 2 ###reference_###. At the shared period of length symbol intervals, the BS simultaneously serves all users. Both bit-users and sem-users will be interfered by each other.\nThe exclusive period of length symbol intervals is dedicated to data transmission for bit-users, i.e., .\n111Note that while we primarily focus on the scenario where the number of transmission symbols used by bit-users exceeds that of sem-users, it should be emphasized that our framework and proposed method can be readily extended to situations where the number of transmission symbols used by sem-users is greater than that of bit-users. This extension can be achieved by\nrevising the overall bit rate of the bit-users in (6 ###reference_###) to .\n \nAs digital transmission is employed at the bit-user, the achievable bit rate (bits/s/Hz) during the shared period is given by [1 ###reference_b1###]\nwhere denotes the signal-to-interference-plus-noise ratio (SINR) of bit-user during the shared period.\nThen, at the exclusive period, the BS only serves the bit-users,\nand the corresponding achievable bit rate is given by\nwhere .\nAs a result, the overall normalized bit rate of the bit-user in a frame is defined as below."
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "II-C Semantic Communication",
"text": "In semantic communication, the semantic rate no longer focuses on the symbol error rate, but on the quality of task completion.\nFundamentally, the performance of semantic communication hinges on the effectiveness of the JSCC model and the wireless noise intensity.\nIn this sense,\nthe semantic rate can be generally expressed as\nwhere denotes the semantic model composed of deep neural networks (DNNs) that determines and the specific method for the extraction of semantic information, and is the SINR of the sem-user at the shared period.\nIn the context of image transmission scenario, is evaluated under the widely-adopted performance metric called structural similarity index measure (SSIM).\nAs mentioned, the overall semantic rate is determined by the adopted JSCC model and the transmission environment.\nThe former represents the semantic compression and exploitation ability of the semantic communication, while the later determines the level of noise disturbation.\nHowever, semantic communication highly relies on neural networks for semantic extraction and recovery, the black-box nature of which hinders the theoretical analysis, making unable to be acquired precisely.\nA commonly adopted method for tackling this problem is data regression [28 ###reference_b28###, 39 ###reference_b39###, 23 ###reference_b23###], which obtains the mapping from and to through sufficient experimental instances and curve fitting,\nwhich will be further elaborated in the subsequent section."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3",
|
| 61 |
+
"parent_section_id": null,
|
| 62 |
+
"section_name": "III JSCC Design and Semantic Rate Approximation",
|
| 63 |
+
"text": "In this section, we will elaborate on the design of semantic communication in detail.\nConsidering the task of image transmission,we first present the proposed design of the JSCC model.\nThen, we conduct a series of experiments to evaluate the performance in different system settings.\nBuilding on this, we approximate the semantic rate with data regression.\nFinally, the problem that jointly optimizes the beamforming vectors and the downsampling depth is formulated."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.1",
|
| 67 |
+
"parent_section_id": "3",
|
| 68 |
+
"section_name": "III-A JSCC Network Characterization",
|
| 69 |
+
"text": "###figure_3### The proposed JSCC network for image transmission is shown in Fig. 3 ###reference_###.\nAt the encoder part, we consider compressing the original image through multiple downsample modules ,\neach of which comprises a residual block [40 ###reference_b40###], followed by a convolution layer.\nThe number of filters in all the convolution layers is set to .\nAfter each downsample module, the image size is reduced by half, and the number of channels is fixed to . For the upsample module at the decoder part, the reverse process is conducted.\nWithout loss of generality,\nwe consider the image having a square size, that is, , with being the image size,\nand \u201c3\u201d the number of channels,\nrespectively.\nAdditionally, in typical 5G and beyond communication systems, due to the rapid growth in the number of users and data volume,\ncareful resource allocation is needed at the BS side.\n As a result, the available communication resources for users vary considerably in time and space, which poses new requirements for semantic communication. Specifically, the JSCC model should be able to dynamically adjust the number of the transmission symbols.\nTo this end, we propose a multi-exit mechanism, as illustrated in Fig. 3.\nWith this mechanism, the decoder can exit early rather than pass all the downsample modules. Consequently, the size of the latent representation can be adjusted by selecting the number of passed downsample module. Let be the number of passed downsample module, then the number of required transmission symbols is given by"
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "3.2",
|
| 73 |
+
"parent_section_id": "3",
|
| 74 |
+
"section_name": "III-B Semantic Rate Approximation",
|
| 75 |
+
"text": "Before deployment,\ntraining is required to obtain the JSCC neural network .\nWith the multi-exit mechanism, it is desired that the downsample and upsample modules in can work independently, and also be incorporated into deeper models (i.e., with a larger ).\nTo this end, we propose a module-by-module training algorithm. As shown in Algorithm 1 ###reference_###, the modules are trained sequentially, with only the weight parameters in the current layer modules being updated during the -th round of training, while the upper layer modules (i.e., , ) are frozen222\u201cFrozen\u201d means that the parameters of the module will not change anymore..\nLet be the JSCC model with a specific downsampling depth . The semantic rate defined in (7 ###reference_###) is given by , where denotes the SNR.\nFor simplicity, we use the notation for in the rest of this paper, since is uniquely determined by when the downsampling and upsampling modules are specified.\nWe train on the ImageNet dataset [41 ###reference_b41###], a large-scale image dataset containing over 14 million images, which serves as a standard benchmark for various computer vision tasks.\nWe consider the additive white Gaussian noise (AWGN) channel333Note that we assume AWGN channel for simplicity, such that the JSCC model can be trained on the BS side, and the training overhead can thus be significantly reduced. Nevertheless, as we discussed in the previous work [42 ###reference_b42###], the model trained under AWGN cases can be directly applied to the MISO cases with minor modification. This is because the final received signal can be transformed into an equivalent AWGN form when recovery precoding is adopted at the receiver., where the SNR is fixed to 10 dB in the training process. Moreover, mean squared error (MSE) criteria is adopted as the loss function, and the Adam optimizer with the initiate learning rate of is adopted.\nAfter sufficient training, we evaluate the performance on the validation dataset with different and SNR settings, under SSIM. The evaluation results are depicted in Fig. 4 ###reference_###.\n It can be seen that with an increasing , i.e., more stringent compression of the original image, the performance floor when decreases monotonically.\n\nBesides, for each , follows an shape with respect to in dB, which is also revealed in [28 ###reference_b28###, 43 ###reference_b43###, 44 ###reference_b44###] under the text transmission task with the DeepSC model [10 ###reference_b10###]. Therefore, similar to [28 ###reference_b28###], the generalized logistic function could be utilized to well approximate , as follows.\nwhere , , , are parameters determined by , and are obtained through curve fitting.\n###figure_4###"
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "3.3",
|
| 79 |
+
"parent_section_id": "3",
|
| 80 |
+
"section_name": "III-C Problem Formulation",
|
| 81 |
+
"text": "As illustrated in Section II-A ###reference_###,\nthere are two types of users with different performance metrics in the sematic-bit coexisting communication system. In this paper, we aim to maximize the semantic rate of sem-users while satisfying the QoS requirements of bit-users,\nby jointly optimizing the beamforming vectors and the downsampling depth.\nThe considered optimization problem can be formulated as\nwhere , , , and , with and . denotes the requirements of transmission rate in one frame from bit-user . denotes the transmit power budget of the BS. and denote the transmission rate defined in (4 ###reference_###) and (5 ###reference_###), respectively.\n is the approximated semantic rate given by (III-B ###reference_###).\nAs shown in , beamforming design for a semantic-bit coexisting system faces some new challenges compared to a BitCom system.\nFirstly, the semantic rate admits a completely different form (which is neither convex nor concave) from channel capacity w.r.t. SINR, which renders the existing interference suppression algorithms ineffective.\nAdditionally, the performance of sem-users also partially depends on the downsampling depth that requires careful design. Unfortunately, there exists a strong coupling between beamforming design and , making the problem even more challenging to solve."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "IV Joint Optimization of Beamforming and -Configuring for Coexisting System",
|
| 87 |
+
"text": "In this section, we solve the problem for beamforming design and configuring in semantic-bit coexisting MU-MISO systems.\nAs discussed in Remark 1 ###reference_ark1###, it is hard to directly solve the joint optimization problem, and we thus consider solving by optimizing and alternatively."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.1",
|
| 91 |
+
"parent_section_id": "4",
|
| 92 |
+
"section_name": "IV-A Beamforming Design",
|
| 93 |
+
"text": "In this subsection, we optimize the beamforming matrix in with a given , and the subproblem is given below.\nAs shown in , the objective function (11a ###reference_.1###) is non-convex as (11a ###reference_.1###) is a transcendental function of .\nMoreover, the fractional expression exists in the QoS constraints.\nTherefore, is a NP-hard problem, indicating that the optimal solution is intractable.\nWe thus resort to a suboptimal solution.\nTo this end, the problem-solving process is mainly divided into four steps.\nFirstly,\nwe relax the power constraint by regulating the noise intensity with .\nThen, we propose a surrogate function for approximating the objective function. Next, the transforming method proposed in [45 ###reference_b45###] is adopted to transform the multiple-ratio fractional programming (FP) problem into a QCQP problem.\nFinally, the resulting QCQP problem is solved in a low-complexity manner."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "4.1.1",
|
| 97 |
+
"parent_section_id": "4.1",
|
| 98 |
+
"section_name": "IV-A1 Problem Transformation",
|
| 99 |
+
"text": "It can be observed that the beamforming vectors appear in as\n the form of SINR in both the objective function (11a ###reference_.1###) and the QoS constraints (10b ###reference_.2###).\nWithout loss of optimality, similar to [34 ###reference_b34###, 46 ###reference_b46###], the power constraint can be removed by integrating it to the SINR terms, as follows.\nwhere the equivalent SINR terms are given by\n\nLet and denote the optimal solutions of problems and , respectively.\nBy observing , , and in (13 ###reference_###), (14 ###reference_###), and (15 ###reference_###),\nit can be inferred that should also be an optimal solution for problem , where is a scaling factor.\nWhen serves as a power normalization scalar, i.e., , achieves the maximum value of the objective function in problem , as maximizes the objective function of problem .\nMoreover, it is straightforward to validate that also satisfies (10b ###reference_.2###), (10c ###reference_.3###).\nTherefore, we can conclude that .\nMore importantly, this allows us to solve problem by first solving problem and then applying power normalization to the solution."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "4.1.2",
|
| 103 |
+
"parent_section_id": "4.1",
|
| 104 |
+
"section_name": "IV-A2 Objective Approximation",
|
| 105 |
+
"text": "Observing problem , it can be found that the objective function (12a ###reference_.1###) presents a complex transcendental form that can not be directly tackled. To address it,\nObserving problem , it can be found that the objective function (12a) presents a complex transcendental form that can not be directly tackled. To address it,\nwe first employ the MM algorithm [47 ###reference_b47###] to solve the problem in an alternative manner and then use a surrogate function to approximate the objective function.\nFirstly, the semantic rate in is given by\n###figure_5### ###figure_6### The exemplary functions with different settings are depicted in Fig. 5 ###reference_###.\nNote that, our objective is to maximize the semantic rate w.r.t. the precoding matrix rather than the SINR term .\nTherefore, it is necessary to find an appropriate surrogate function that can accurately capture the shape of and also has a simple form for ease of handling during the optimization of .\n To this end,\nwe propose the following surrogate function to approximate at a given station point .\nA lower bound on the semantic rate function is given by\nwhere the equality holds only when .\n,\nand is given as follows.\nProof:\nTo establish Proposition 1 ###reference_position1###, we only need to prove .\nFor , is a concave function for . Let , then we have , the equality holds if and only if . By sorting this result, we can conclude that Proposition 1 ###reference_position1### holds for .\nFor , is a convex function for . Let , then we have , the equality holds if and only if . By sorting this result, we can conclude that Proposition 1 ###reference_position1### holds for .\nIn a nutshell, Proposition 1 ###reference_position1### holds for any , which ends the proof.\nPrompted by Proposition 1 ###reference_position1###,\nwe use as the surrogate function for .\nAs shown in Fig. 5 ###reference_###, the proposed surrogate function captured the original well.\nWe further take in (13 ###reference_###) into , which yields and\nleads to the following optimization problem:\nIt can be found that\n exhibits a fractional form of . Therefore, we resort as a function of beamforming vectors, as shown in (16 ###reference_###) on the top of this page, where , , , and when ;\n, , , and when .\nBy approximating with the proposed surrogate function in (18 ###reference_###),\nthe problem is transformed into a multiple-ratio FP problem.\nNote that similar approximation method can be easily adopted for other transmission problems such as resource allocation by applying the proposed surrogate function in (19 ###reference_###).\nMoreover, this method can also be applied to multi-user multiple input multiple output (MU-MIMO) scenarios.\nWith the approximation method, the MU-MIMO beamforming problem can be reformulated as a fractional programming problem.\nThen, we can either transform the MU-MIMO problem to MU-MISO problem using the method in [48 ###reference_b48###] or directly solve the fractional programming problem with matrix variables to be optimized."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "4.1.3",
|
| 109 |
+
"parent_section_id": "4.1",
|
| 110 |
+
"section_name": "IV-A3 Fractional Programming",
|
| 111 |
+
"text": "In this part, we solve the multiple-ratio FP problem .\nNote that, still cannot be directly solved,\nbecause the QoS constraints are non-convex. Besides, in terms of the beamforming vector,\nboth the objective and the constraints (12b ###reference_.2###) are in fractional form.\nTo this end, the alternating optimization method is considered.\nWe first apply the Lagrangian dual transformation proposed in [45 ###reference_b45###] to (12b ###reference_.2###), and the problem can be equivalently written as\nwhere , are auxiliary variables for the SINR terms,\nNote that, for maximizing and with a fixed , the optimal and equal to the corresponding SINR term of user , as follows.\nMoreover, and hold for optimal and , respectively. The same properties holds for and as well.\nTherefore, with the optimal and , the problem can be reduced to\nIt can be found that in , the sum-of-ratio form exists in both the objective and the QoS constraints.\nTherefore, we adopt the quadratic transformation proposed in [45 ###reference_b45###], which yields the following optimization problem.\nwhere and are given by\nIn , three auxiliary vectors , , and are introduced to transform the original problem to a quadratic programming problem.\nMore specifically, with a given\n,\nthe optimal , , and for are given as follows.\nThen, in terms of fixed , , and , the problem can be further reduced to\nIt can be observed that is actually an inhomogeneous and separable QCQP problem,\nwhich can be solved by convex optimization toolboxes like CVX in MATLAB. However, the complexity of CVX is still unbearable since needs to be solved in each iteration.\nTherefore,\nwe derive a semi-closed form solution for and propose a computationally efficient fixed point algorithm to search for the Lagrangian multipliers.\nFormally,\nthe Lagrangian of is given by\nwhere is the vector composed of multiple non-negative Lagrange multipliers.\nBy taking the first-order derivative of over the precoding vectors (i.e., , ) and setting it to zero,\nwe have the following proposition.\nFor the MU-MISO system with channel , the optimal solution of the problem is given by\nwhere , .\n(Optimal Beamforming Structure)\nAs shown in Proposition 1, it can be observed that the precoding vectors of both bit-users and sem-users are linear transformations of their corresponding channel vectors, where the weight coefficients is divided into linear power allocation coefficients (i.e., and in (35 ###reference_###) and (37 ###reference_###)) and priority coefficients for interference suppression (i.e., , , in and ). Comparing (35 ###reference_###) and (37 ###reference_###), we can find that and have different weight coefficients, indicating that sem-users have different resource allocation strategies compared to traditional digital communication due to their different objective functions. Furthermore, according to Proposition 1, the optimal precoding vector of is only determined by , so we can turn to find the optimal for solving , thereby reducing computational complexity.\n(Computation Complexity Analysis)\nFor the calculation of , it can be observed that the inverse matrix is shared among bit-users, thus only needs to be calculated once. 
The complexity for calculating is given by .\nFor the calculation of , according to the Sherman\u2013Morrison formula, we have , where is defined in (39 ###reference_###), and .\nTherefore, the complexity for calculating is given by .\nIn a nutshell, the overall complexity for calculating the beamforming vectors by (35 ###reference_###) and (37 ###reference_###) is .\n\nAs discussed in Remark 2 ###reference_ark2###, to obtain the optimal solution of , only the Lagrange multipliers needs to be determined, where denotes the optimal dual variable.\n In addition, it can be found that\nthe optimal solution of should satisfy the QoS constraints with equality.\n As a result, we can obtain by the fixed-point algorithm.\nThe update rule is presented in (40 ###reference_###).\nThe solution process for is concluded in Algorithm 2 ###reference_###.\nSo far, the problem has been solved in an alternating manner, which is summarized in Algorithm 3 ###reference_### and termed as majorization-minimization fractional programming (MM-FP).\nThere are two groups of introduced variables, namely SINR related terms (, , ) and the fraction related terms (, , ).\nAs shown in (30 ###reference_###)-(32 ###reference_###), the two groups are updated in sequence, followed by the update of .\nAlgorithm 3 ###reference_### requires multiple iterations, each of which is divided into three steps. The first step includes the update of SINR related terms, with a complexity of ; the second step includes the updates of the ratio related terms, with a complexity of ; the third step includes the updates of precoding matrix, with a complexity of , where denotes the the number of iteration rounds for the fixed point method.\nTherefore, the complexity of Algorithm 3 ###reference_### is\nwhere denotes the number of iterations.\nTo address concerns about the high computational complexity introduced by multiple iterations, we also propose a low-complexity beamforming method to enhance practicality.\nGenerally, beamforming involves determining the precoding direction and allocating power. As shown in (34) and (36), the precoding direction is determined by the channel vectors, , , , and . Given this, we consider setting a uniform for all bit-user for avoiding iteration, and approximating the value of , , by (29), (30), (31), where the precoding vectors are replaced by the corresponding channel vector. Consequently, the beamforming direction, denoted by , is established. Then we allocate the power by solving the following problem.\nwhere and denote the power allocation vectors of sem-users and bit-users, respectively.\nThe low-complexity majorization-minimization fractional programming (LP-MM-FP) algorithm is concluded in Algorithm 4 ###reference_###.\nThe first step includes the calculation of ratio and SINR related terms, with a complexity of ; the second step includes determining beamforming direction, with a complexity of ; the third step is conducted for power allocation, with a complexity of . The overall complexity of Algorithm 4 ###reference_### is given by\nComparing (43 ###reference_###) with (41 ###reference_###), the number of users is generally much smaller than the number of antennas.\nThis suggests that Algorithm 4 ###reference_### has a lower computational complexity compared to Algorithm 3 ###reference_###."
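The "regularized channel inversion" structure described in the proposition and remarks above can be sketched as follows. The per-user weights, the regularization scalar standing in for the Lagrange-multiplier terms, and the simple Frobenius-norm power normalization are placeholders, not the paper's closed-form expressions (35)-(40).

```python
import numpy as np

def beamformers_from_structure(H, user_weights, lam, power):
    """Illustrative regularized-channel-inversion beamforming structure.

    H: (U, N) complex channel matrix, one row per user.
    user_weights: per-user priority coefficients (placeholders for the
        closed-form weights of the proposition).
    lam: regularization scalar standing in for the Lagrange-multiplier terms.
    """
    U, N = H.shape
    A = H.conj().T @ np.diag(user_weights) @ H + lam * np.eye(N)  # inverse shared by all users
    A_inv = np.linalg.inv(A)
    W = A_inv @ H.conj().T            # one precoding direction per user (columns)
    W /= np.linalg.norm(W)            # simple power normalization to the budget
    return np.sqrt(power) * W

rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 16)) + 1j * rng.standard_normal((4, 16))) / np.sqrt(2)
W = beamformers_from_structure(H, user_weights=np.ones(4), lam=1.0, power=1.0)
print(W.shape, np.linalg.norm(W) ** 2)  # (16, 4), total power ~ 1.0
```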
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "4.2",
|
| 115 |
+
"parent_section_id": "4",
|
| 116 |
+
"section_name": "IV-B Overall Algorithm",
|
| 117 |
+
"text": "###figure_7### ###figure_8### In this subsection, we shall present the proposed method for solving the problem . Firstly, the subproblem that optimizes the beamforming vector with a fixed has been tackled in a computation-efficient manner.\nSubsequently, recognizing that the feasible set for (i.e., the downsampling depth) is typically constrained within a narrow integer range, we employ the exhaustive algorithm to identify the optimal .\nThe overall algorithm is presented in Algorithm 5 ###reference_###.\nIt can be found that executing Algorithm 5 ###reference_### requires at most times the complexity of Algorithm 3 ###reference_###, where and represent the maximum and minimum feasible values of , respectively.\nFor conducting Algorithm 3 ###reference_###, it requires multiple iterations, each of which is divided into three steps. The first step includes the update of SINR related terms, with a complexity of ; the second step includes the updates of the ratio related terms, with a complexity of ; the third step includes the updates of precoding matrix, with a complexity of , where denotes the the number of iteration rounds for the fixed point method. Therefore, the complexity for conducting Algorithm 3 ###reference_### is , where denotes the iteration number of Algorithm 3 ###reference_###.\nIn conclusion, the complexity for conducting Algorithm 5 ###reference_### is given by ."
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "5",
|
| 121 |
+
"parent_section_id": null,
|
| 122 |
+
"section_name": "Numerical Results",
|
| 123 |
+
"text": ""
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "5.1",
|
| 127 |
+
"parent_section_id": "5",
|
| 128 |
+
"section_name": "Simulation Setup",
|
| 129 |
+
"text": "System setup. We consider the clustered Saleh-Valenzuela channel model [49 ###reference_b49###], in which the channel from the BS to a specific user is given as follows.\nwhere is the number of paths and is the channel attenuation of the -th path. Without loss of generality, we set to .\n denotes the azimuth angle of departure (AoD) at the transmitter, and we assume follows a uniform distribution from to .\nThe response vector of Uniform Linear Array (ULA) at the BS side can be expressed as\nFor the MISO system setting, unless specified, the following system parameters will be used as the default setting in the experiments: , , , , dB, .444It is worth noting that the proposed method should be also adaptable to other configurations.\nBesides, as mentioned in Section III-B ###reference_###, the ImageNet dataset and SSIM are used as the training dataset and performance metric, respectively.\nThe image size is set to .\nThe length of the frame is set to , and the number of filters is set to .The feasible set of downsampling depth is given by .\nUsing (8 ###reference_###), the corresponding feasible set of is given by .\nWe also conduct performance evaluation on the Kodak image dataset555https://r0k.us/graphics/kodak/ ###reference_r0k.us/graphics/kodak/###,\nwhich comprises of 24 high-quality images.\nBenchmark schemes.\nWe compare the proposed beamforming algorithm, MM-FP and LP-MM-FP, with three commonly-adopted beamforming schemes, including the zero focing (ZF) algorithm, maximum ratio transmission (MRT) algorithm, and weighted minimum mean-square error (WMMSE) algorithm.\nNote that the aforementioned algorithms cannot be directly used for solving , as the QoS constraints may not be satisfied.\nGiven this, we first obtain the beamforming direction through these algorithms, i.e., .\nThen we reallocate the power by solving the problem , and the final beamforming vector is given by .\nThe resulting benchmark schemes are named ZF-PC, MRT-PC, and WMMSE-PC respectively.\n###figure_9### ###figure_10### ###figure_11###"
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "5.2",
|
| 133 |
+
"parent_section_id": "5",
|
| 134 |
+
"section_name": "Evaluation of Coexisting System",
|
| 135 |
+
"text": "In this subsection, we evaluate the effectiveness of the semantic-bit coexisting system by comparing it with the BitCom system. The semantic-bit coexisting system utilizes JSCC with neural network for image transmission. As a benchmark, we consider the BitCom scheme, where BS employs the standard separate source and channel scheme to transmit images to sem-users. Specifically, BPG is used for source coding with a compression quality set to . For channel coding, we adopt the Turbo codes following the LTE standard [50 ###reference_b50###], with a coding rate of and a block length of . For modulation, we utilize the QAM scheme along with a soft demodulation process. The two different systems result in two sets of , and we conduct Algorithm 3 ###reference_### for beamforming under the two parameter sets.\nThe results are presented in Fig. 6 ###reference_###, where Fig. 6(a) ###reference_sf1### shows the performance in low SNR case (SNR = dB), and Fig. 6(b) ###reference_sf2### in high SNR case (SNR = dB).\nIn the low SNR case, the BitCom system fails to work under any of the examined QoS requirements.\n This is because BitCom is sensitive to noise. In the high SNR case, with a low QoS requirement, sem-users in the BitCom system enjoy low interference from bit-users, and thanks to the channel coding, the receiver is able to perfectly decode the BPG bit flow and attains good performance when . With the increase of QoS requirements, the strong interference causes the performance of BitCom to degrade quickly.\nIn the meantime,\nthe semantic-bit coexisting system only experiences a slight performance degradation as the QoS requirement increases from 0 to 1.5,\nthus demonstrating its effectiveness.\nSince data driven method is used to approximate the semantic rate, it is important to compare the real semantic rate with the approximated one.\nWe first implement the proposed beamforming scheme in Algorithm 3 ###reference_### and then compare the image recovery quality (i.e., SSIM) and the objective value in . The performance comparison is presented in Fig. 7 ###reference_###, where three different settings are considered.\nThe final performance is averaged over test samples.\nAs shown in Fig. 7 ###reference_###, the approximation and simulation curves almost overlap, and the approximating-based method can well capture the performance growth trend as SNR increases.\nThis validates the effectiveness of the data driven method in accurately approximating the semantic rate.\n###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30### ###figure_31### ###figure_32### ###figure_33### ###figure_34### ###figure_35###"
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"section_id": "5.3",
|
| 139 |
+
"parent_section_id": "5",
|
| 140 |
+
"section_name": "Performance of Beamforming Design",
|
| 141 |
+
"text": "In this subsection, we compare the performance of the proposed beamforming algorithm with three benchmark schemes.\nFig. 8 ###reference_### depicts the semantic performance using different beamforming schemes in the coexisting system.\nWe evaluate performance across different SNR and QoS settings.\nUnder different QoS settings, Fig. 8(a) ###reference_sf1### shows that heuristic beamforming schemes, such as ZF-PC and MRT-PC, perform poorly since they fail to coordinate beamforming direction and power for the problem . The optimization-induced method WMMSE-PC achieves better performance than ZF-PC and MRT-PC. However, WMMSE-PC fails to consider the semantic objective in Fig. 4 ###reference_### and (III-B ###reference_###), which has a different mapping relationship between SNR and performance. As a result, the performance of WMMSE-PC degrades significantly when QoS requirements increase,\nwhich implies that tailored design of beamforming for coexisting systems is required. \nThe proposed beamforming algorithm outperforms the three benchmark schemes in all QoS settings, achieving the best performance given all the examined QoS requirements.\nMoreover, the proposed LP-MM-FP algorithm achieves near performance with MM-FP algorithm especially in low QoS regime, while with much lower computational complexity.\nWe also present some test examples in Fig. 9 ###reference_###, where is set to 0.8. The recovered image from the system that adopts the ZF-PC or MRT-PC beamforming schemes has an obvious blur, which is also reflected in SSIM performance. The system with the WMMSE-PC algorithm has relatively more noise points in the first and third image. The system with the proposed beamforming scheme recovers the first and second image clearly, with some blurs in the third image, yet still outperforms the other three benchmark schemes in terms of SSIM.\nFig. 8(b) ###reference_sf2### illustrates the performance comparison across different SNR settings. ZF-PC performs poorly in the low SNR regime, although it can approach the performance upper bound like WMMSE-PC and the proposed method when dB. MRT-PC performs relatively well in the low SNR regime, but the performance quickly degrades compared with other schemes as SNR increases since it does not consider user interference for beamforming design. Similarly, the proposed scheme outperforms these benchmark schemes in all the examined SNR settings, particularly in the low SNR regime, demonstrating its robustness. We present some test examples in Fig. 10 ###reference_###, and the recovery performance is consistent with the analytical results in Fig. 8(b) ###reference_sf2###. The system that adopts the proposed beamforming scheme achieves the best SSIM performance in all three recovered images."
|
| 142 |
+
},
|
| 143 |
+
{
|
| 144 |
+
"section_id": "5.4",
|
| 145 |
+
"parent_section_id": "5",
|
| 146 |
+
"section_name": "Complexity Comparison of Beamforming Algorithms",
|
| 147 |
+
"text": "In this subsection, we evaluate the CPU execution time of various beamforming schemes on Intel I9-9900K CPU. As shown in Table I ###reference_###, the proposed MM-FP algorithm has the highest computation time due to the iterative nature. By eliminating the need for iterative optimization, the proposed LP-MM-FP algorithm significantly reduces computation time. Specifically, LP-MM-FP has a similar CPU runtime to the MRT and ZF algorithms and is faster than the WMMSE algorithm. This indicates that the LP-MM-FP algorithm offers a complexity comparable to low-complexity methods like MRT and ZF while delivering competitive performance with the WMMSE algorithm, underscoring its practicality."
|
| 148 |
+
},
|
| 149 |
+
{
|
| 150 |
+
"section_id": "5.5",
|
| 151 |
+
"parent_section_id": "5",
|
| 152 |
+
"section_name": "Evaluation of Configuring Strategies",
|
| 153 |
+
"text": "###figure_36### This subsection evaluates the effectiveness of different methods for the configuration of in\na typical loaded scenario, i.e., .\nWe compare the exhaustive search against two benchmark schemes: Random, which randomly selects a value of from to ; the best fixed setting, which we found to be the minimum value based on numerical experiments.We present the performance comparison in Fig. 11 ###reference_###, where we use to denote the gap between the currently selected QoS value and the maximum achievable QoS, and a smaller indicates a more stringent QoS requirement. We observe that the performance of all schemes improves as increases, with the Random algorithm performing noticeably worse than the other three schemes.\nThe fixed algorithm has already achives satisfactory performance in the transmission task considered in this paper, which can be adopted in the scenarios with limited computation capability.\n\nThe considered method optimizes through the exhaustive method and can achieves the best performance."
|
| 154 |
+
},
|
| 155 |
+
{
|
| 156 |
+
"section_id": "6",
|
| 157 |
+
"parent_section_id": null,
|
| 158 |
+
"section_name": "VI Conclusion",
|
| 159 |
+
"text": "In this paper,\nwe considered a semantic-user and bit-user coexisting system.\nA beamforming problem that maximizes the semantic rate under QoS constraints from bit-users and power constraint was formulated and solved in an low-complexity manner.\nExperiments show that the proposed method significantly improves the existing beamforming methods dedicated for BitCom.\n Addressing issues beyond beamforming in the coexisting system remains an interesting future direction."
|
| 160 |
+
}
|
| 161 |
+
],
|
| 162 |
+
"appendix": [],
|
| 163 |
+
"tables": {
|
| 164 |
+
"1": {
|
| 165 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.1.1.1.1\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.1.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.1.1.1.1.2.1\">#of QoS and</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.1.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.1.1.1.1.1.1\">SNR ()</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.2\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.1.1.2.1\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.2.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.1.1.2.1.1.1\">MRT</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.2.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.1.1.2.1.2.1\">-PC/ms</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.3\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.1.1.3.1\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.3.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.1.1.3.1.1.1\">ZF-PC</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.3.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.1.1.3.1.2.1\">/ms</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.4\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.1.1.4.1\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.4.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.1.1.4.1.1.1\">WMMSE</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.4.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.1.1.4.1.2.1\">-PC/ms</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.5\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.1.1.5.1\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.5.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.1.1.5.1.1.1\">MM-FP</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.5.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.1.1.5.1.2.1\">/ms</td>\n</tr>\n</table>\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.6\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.1.1.6.1\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.6.1.1\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.1.1.6.1.1.1\">LP-MM</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.6.1.2\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_left\" id=\"S5.T1.1.1.6.1.2.1\">-FP/ms</td>\n</tr>\n</table>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.2.2.1\">( dB)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.2.2.2\">36.5</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.2.2.3\">18.1</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.2.2.4\">42.7</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" 
id=\"S5.T1.2.2.5\">82.3</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.2.2.6\">25.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.3.3.1\">( dB)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.3.3.2\">40.4</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.3.3.3\">17.5</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.3.3.4\">40.4</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.3.3.5\">133.7</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.3.3.6\">36.3</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table I: </span><span class=\"ltx_text\" id=\"S5.T1.5.1\" style=\"color:#0000FF;\">The CPU Running Time of Beamforming Algorithms</span></figcaption>\n</figure>",
|
| 166 |
+
"capture": "Table I: The CPU Running Time of Beamforming Algorithms"
|
| 167 |
+
}
|
| 168 |
+
},
|
| 169 |
+
"image_paths": {
|
| 170 |
+
"1": {
|
| 171 |
+
"figure_path": "2403.11693v3_figure_1.png",
|
| 172 |
+
"caption": "Figure 1: Semantic-bit coexisting communication system framework",
|
| 173 |
+
"url": "http://arxiv.org/html/2403.11693v3/x1.png"
|
| 174 |
+
},
|
| 175 |
+
"2": {
|
| 176 |
+
"figure_path": "2403.11693v3_figure_2.png",
|
| 177 |
+
"caption": "Figure 2: Transmission protocol for bit-users and sem-users",
|
| 178 |
+
"url": "http://arxiv.org/html/2403.11693v3/x2.png"
|
| 179 |
+
},
|
| 180 |
+
"3": {
|
| 181 |
+
"figure_path": "2403.11693v3_figure_3.png",
|
| 182 |
+
"caption": "Figure 3: JSCC network for image transmission with multi exit mechanism",
|
| 183 |
+
"url": "http://arxiv.org/html/2403.11693v3/x3.png"
|
| 184 |
+
},
|
| 185 |
+
"4": {
|
| 186 |
+
"figure_path": "2403.11693v3_figure_4.png",
|
| 187 |
+
"caption": "Figure 4: Performance evaluation under different K\ud835\udc3eKitalic_K and SNR settings",
|
| 188 |
+
"url": "http://arxiv.org/html/2403.11693v3/x4.png"
|
| 189 |
+
},
|
| 190 |
+
"5(a)": {
|
| 191 |
+
"figure_path": "2403.11693v3_figure_5(a).png",
|
| 192 |
+
"caption": "(a) eK\u22641subscript\ud835\udc52\ud835\udc3e1e_{K}\\leq 1italic_e start_POSTSUBSCRIPT italic_K end_POSTSUBSCRIPT \u2264 1\nFigure 5: Objective Approximation with different surrogate functions",
|
| 193 |
+
"url": "http://arxiv.org/html/2403.11693v3/x5.png"
|
| 194 |
+
},
|
| 195 |
+
"5(b)": {
|
| 196 |
+
"figure_path": "2403.11693v3_figure_5(b).png",
|
| 197 |
+
"caption": "(b) eK>1subscript\ud835\udc52\ud835\udc3e1e_{K}>1italic_e start_POSTSUBSCRIPT italic_K end_POSTSUBSCRIPT > 1\nFigure 5: Objective Approximation with different surrogate functions",
|
| 198 |
+
"url": "http://arxiv.org/html/2403.11693v3/x6.png"
|
| 199 |
+
},
|
| 200 |
+
"6(a)": {
|
| 201 |
+
"figure_path": "2403.11693v3_figure_6(a).png",
|
| 202 |
+
"caption": "(a) Low SNR case (SNR=0SNR0{\\rm SNR}=0roman_SNR = 0 dB)\nFigure 6: Performance comparison of transmission systems",
|
| 203 |
+
"url": "http://arxiv.org/html/2403.11693v3/x7.png"
|
| 204 |
+
},
|
| 205 |
+
"6(b)": {
|
| 206 |
+
"figure_path": "2403.11693v3_figure_6(b).png",
|
| 207 |
+
"caption": "(b) High SNR case (SNR=5SNR5{\\rm SNR}=5roman_SNR = 5 dB)\nFigure 6: Performance comparison of transmission systems",
|
| 208 |
+
"url": "http://arxiv.org/html/2403.11693v3/x8.png"
|
| 209 |
+
},
|
| 210 |
+
"7": {
|
| 211 |
+
"figure_path": "2403.11693v3_figure_7.png",
|
| 212 |
+
"caption": "Figure 7: Validation of Semantic Rate Approximation",
|
| 213 |
+
"url": "http://arxiv.org/html/2403.11693v3/x9.png"
|
| 214 |
+
},
|
| 215 |
+
"8(a)": {
|
| 216 |
+
"figure_path": "2403.11693v3_figure_8(a).png",
|
| 217 |
+
"caption": "(a) SNR=0SNR0{\\rm SNR=0}roman_SNR = 0 dB\nFigure 8: Performance comparison of different beamforming schemes",
|
| 218 |
+
"url": "http://arxiv.org/html/2403.11693v3/x10.png"
|
| 219 |
+
},
|
| 220 |
+
"8(b)": {
|
| 221 |
+
"figure_path": "2403.11693v3_figure_8(b).png",
|
| 222 |
+
"caption": "(b) \u03b2i=1,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefd\ud835\udc561for-all\ud835\udc561\u2026\ud835\udc35\\beta_{i}=1,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 1 , \u2200 italic_i = 1 , \u2026 , italic_B\nFigure 8: Performance comparison of different beamforming schemes",
|
| 223 |
+
"url": "http://arxiv.org/html/2403.11693v3/x11.png"
|
| 224 |
+
},
|
| 225 |
+
"9(a)": {
|
| 226 |
+
"figure_path": "2403.11693v3_figure_9(a).png",
|
| 227 |
+
"caption": "Figure 9: Examples of reconstructed images under SNR=0SNR0{\\rm SNR}=0roman_SNR = 0 dB, \u03b2i=0.8,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefd\ud835\udc560.8for-all\ud835\udc561\u2026\ud835\udc35\\beta_{i}=0.8,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 0.8 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 228 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_0_qos_0.8/select/origin/0.png"
|
| 229 |
+
},
|
| 230 |
+
"9(b)": {
|
| 231 |
+
"figure_path": "2403.11693v3_figure_9(b).png",
|
| 232 |
+
"caption": "Figure 9: Examples of reconstructed images under SNR=0SNR0{\\rm SNR}=0roman_SNR = 0 dB, \u03b2i=0.8,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefd\ud835\udc560.8for-all\ud835\udc561\u2026\ud835\udc35\\beta_{i}=0.8,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 0.8 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 233 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_0_qos_0.8/select/ZF/0.png"
|
| 234 |
+
},
|
| 235 |
+
"9(c)": {
|
| 236 |
+
"figure_path": "2403.11693v3_figure_9(c).png",
|
| 237 |
+
"caption": "Figure 9: Examples of reconstructed images under SNR=0SNR0{\\rm SNR}=0roman_SNR = 0 dB, \u03b2i=0.8,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefd\ud835\udc560.8for-all\ud835\udc561\u2026\ud835\udc35\\beta_{i}=0.8,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 0.8 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 238 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_0_qos_0.8/select/MRT/0.png"
|
| 239 |
+
},
|
| 240 |
+
"9(d)": {
|
| 241 |
+
"figure_path": "2403.11693v3_figure_9(d).png",
|
| 242 |
+
"caption": "Figure 9: Examples of reconstructed images under SNR=0SNR0{\\rm SNR}=0roman_SNR = 0 dB, \u03b2i=0.8,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefd\ud835\udc560.8for-all\ud835\udc561\u2026\ud835\udc35\\beta_{i}=0.8,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 0.8 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 243 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_0_qos_0.8/select/WMMSE/0.png"
|
| 244 |
+
},
|
| 245 |
+
"9(e)": {
|
| 246 |
+
"figure_path": "2403.11693v3_figure_9(e).png",
|
| 247 |
+
"caption": "Figure 9: Examples of reconstructed images under SNR=0SNR0{\\rm SNR}=0roman_SNR = 0 dB, \u03b2i=0.8,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefd\ud835\udc560.8for-all\ud835\udc561\u2026\ud835\udc35\\beta_{i}=0.8,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 0.8 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 248 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_0_qos_0.8/select/LP_MM_FP/0.png"
|
| 249 |
+
},
|
| 250 |
+
"9(f)": {
|
| 251 |
+
"figure_path": "2403.11693v3_figure_9(f).png",
|
| 252 |
+
"caption": "Figure 9: Examples of reconstructed images under SNR=0SNR0{\\rm SNR}=0roman_SNR = 0 dB, \u03b2i=0.8,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefd\ud835\udc560.8for-all\ud835\udc561\u2026\ud835\udc35\\beta_{i}=0.8,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 0.8 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 253 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_0_qos_0.8/select/SCA_FP/0.png"
|
| 254 |
+
},
|
| 255 |
+
"9(g)": {
|
| 256 |
+
"figure_path": "2403.11693v3_figure_9(g).png",
|
| 257 |
+
"caption": "Figure 9: Examples of reconstructed images under SNR=0SNR0{\\rm SNR}=0roman_SNR = 0 dB, \u03b2i=0.8,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefd\ud835\udc560.8for-all\ud835\udc561\u2026\ud835\udc35\\beta_{i}=0.8,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 0.8 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 258 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_0_qos_0.8/select/origin/2.png"
|
| 259 |
+
},
|
| 260 |
+
"9(h)": {
|
| 261 |
+
"figure_path": "2403.11693v3_figure_9(h).png",
|
| 262 |
+
"caption": "Figure 9: Examples of reconstructed images under SNR=0SNR0{\\rm SNR}=0roman_SNR = 0 dB, \u03b2i=0.8,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefd\ud835\udc560.8for-all\ud835\udc561\u2026\ud835\udc35\\beta_{i}=0.8,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 0.8 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 263 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_0_qos_0.8/select/ZF/2.png"
|
| 264 |
+
},
|
| 265 |
+
"9(i)": {
|
| 266 |
+
"figure_path": "2403.11693v3_figure_9(i).png",
|
| 267 |
+
"caption": "Figure 9: Examples of reconstructed images under SNR=0SNR0{\\rm SNR}=0roman_SNR = 0 dB, \u03b2i=0.8,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefd\ud835\udc560.8for-all\ud835\udc561\u2026\ud835\udc35\\beta_{i}=0.8,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 0.8 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 268 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_0_qos_0.8/select/MRT/2.png"
|
| 269 |
+
},
|
| 270 |
+
"9(j)": {
|
| 271 |
+
"figure_path": "2403.11693v3_figure_9(j).png",
|
| 272 |
+
"caption": "Figure 9: Examples of reconstructed images under SNR=0SNR0{\\rm SNR}=0roman_SNR = 0 dB, \u03b2i=0.8,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefd\ud835\udc560.8for-all\ud835\udc561\u2026\ud835\udc35\\beta_{i}=0.8,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 0.8 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 273 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_0_qos_0.8/select/WMMSE/2.png"
|
| 274 |
+
},
|
| 275 |
+
"9(k)": {
|
| 276 |
+
"figure_path": "2403.11693v3_figure_9(k).png",
|
| 277 |
+
"caption": "Figure 9: Examples of reconstructed images under SNR=0SNR0{\\rm SNR}=0roman_SNR = 0 dB, \u03b2i=0.8,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefd\ud835\udc560.8for-all\ud835\udc561\u2026\ud835\udc35\\beta_{i}=0.8,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 0.8 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 278 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_0_qos_0.8/select/LP_MM_FP/2.png"
|
| 279 |
+
},
|
| 280 |
+
"9(l)": {
|
| 281 |
+
"figure_path": "2403.11693v3_figure_9(l).png",
|
| 282 |
+
"caption": "Figure 9: Examples of reconstructed images under SNR=0SNR0{\\rm SNR}=0roman_SNR = 0 dB, \u03b2i=0.8,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefd\ud835\udc560.8for-all\ud835\udc561\u2026\ud835\udc35\\beta_{i}=0.8,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 0.8 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 283 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_0_qos_0.8/select/SCA_FP/2.png"
|
| 284 |
+
},
|
| 285 |
+
"10(a)": {
|
| 286 |
+
"figure_path": "2403.11693v3_figure_10(a).png",
|
| 287 |
+
"caption": "Figure 10: Examples of reconstructed images under SNR=3dB, \u03b2bi=1.0,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefdsubscript\ud835\udc4f\ud835\udc561.0for-all\ud835\udc561\u2026\ud835\udc35\\beta_{b_{i}}=1.0,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_b start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT = 1.0 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 288 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_3_qos_1.0/select/origin/1.png"
|
| 289 |
+
},
|
| 290 |
+
"10(b)": {
|
| 291 |
+
"figure_path": "2403.11693v3_figure_10(b).png",
|
| 292 |
+
"caption": "Figure 10: Examples of reconstructed images under SNR=3dB, \u03b2bi=1.0,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefdsubscript\ud835\udc4f\ud835\udc561.0for-all\ud835\udc561\u2026\ud835\udc35\\beta_{b_{i}}=1.0,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_b start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT = 1.0 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 293 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_3_qos_1.0/select/ZF/1.png"
|
| 294 |
+
},
|
| 295 |
+
"10(c)": {
|
| 296 |
+
"figure_path": "2403.11693v3_figure_10(c).png",
|
| 297 |
+
"caption": "Figure 10: Examples of reconstructed images under SNR=3dB, \u03b2bi=1.0,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefdsubscript\ud835\udc4f\ud835\udc561.0for-all\ud835\udc561\u2026\ud835\udc35\\beta_{b_{i}}=1.0,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_b start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT = 1.0 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 298 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_3_qos_1.0/select/MRT/1.png"
|
| 299 |
+
},
|
| 300 |
+
"10(d)": {
|
| 301 |
+
"figure_path": "2403.11693v3_figure_10(d).png",
|
| 302 |
+
"caption": "Figure 10: Examples of reconstructed images under SNR=3dB, \u03b2bi=1.0,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefdsubscript\ud835\udc4f\ud835\udc561.0for-all\ud835\udc561\u2026\ud835\udc35\\beta_{b_{i}}=1.0,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_b start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT = 1.0 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 303 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_3_qos_1.0/select/WMMSE/1.png"
|
| 304 |
+
},
|
| 305 |
+
"10(e)": {
|
| 306 |
+
"figure_path": "2403.11693v3_figure_10(e).png",
|
| 307 |
+
"caption": "Figure 10: Examples of reconstructed images under SNR=3dB, \u03b2bi=1.0,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefdsubscript\ud835\udc4f\ud835\udc561.0for-all\ud835\udc561\u2026\ud835\udc35\\beta_{b_{i}}=1.0,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_b start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT = 1.0 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 308 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_3_qos_1.0/select/SCA_FP/1.png"
|
| 309 |
+
},
|
| 310 |
+
"10(f)": {
|
| 311 |
+
"figure_path": "2403.11693v3_figure_10(f).png",
|
| 312 |
+
"caption": "Figure 10: Examples of reconstructed images under SNR=3dB, \u03b2bi=1.0,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefdsubscript\ud835\udc4f\ud835\udc561.0for-all\ud835\udc561\u2026\ud835\udc35\\beta_{b_{i}}=1.0,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_b start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT = 1.0 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 313 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_3_qos_1.0/select/SCA_FP/1.png"
|
| 314 |
+
},
|
| 315 |
+
"10(g)": {
|
| 316 |
+
"figure_path": "2403.11693v3_figure_10(g).png",
|
| 317 |
+
"caption": "Figure 10: Examples of reconstructed images under SNR=3dB, \u03b2bi=1.0,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefdsubscript\ud835\udc4f\ud835\udc561.0for-all\ud835\udc561\u2026\ud835\udc35\\beta_{b_{i}}=1.0,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_b start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT = 1.0 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 318 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_3_qos_1.0/select/origin/2.png"
|
| 319 |
+
},
|
| 320 |
+
"10(h)": {
|
| 321 |
+
"figure_path": "2403.11693v3_figure_10(h).png",
|
| 322 |
+
"caption": "Figure 10: Examples of reconstructed images under SNR=3dB, \u03b2bi=1.0,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefdsubscript\ud835\udc4f\ud835\udc561.0for-all\ud835\udc561\u2026\ud835\udc35\\beta_{b_{i}}=1.0,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_b start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT = 1.0 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 323 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_3_qos_1.0/select/ZF/2.png"
|
| 324 |
+
},
|
| 325 |
+
"10(i)": {
|
| 326 |
+
"figure_path": "2403.11693v3_figure_10(i).png",
|
| 327 |
+
"caption": "Figure 10: Examples of reconstructed images under SNR=3dB, \u03b2bi=1.0,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefdsubscript\ud835\udc4f\ud835\udc561.0for-all\ud835\udc561\u2026\ud835\udc35\\beta_{b_{i}}=1.0,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_b start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT = 1.0 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 328 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_3_qos_1.0/select/MRT/2.png"
|
| 329 |
+
},
|
| 330 |
+
"10(j)": {
|
| 331 |
+
"figure_path": "2403.11693v3_figure_10(j).png",
|
| 332 |
+
"caption": "Figure 10: Examples of reconstructed images under SNR=3dB, \u03b2bi=1.0,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefdsubscript\ud835\udc4f\ud835\udc561.0for-all\ud835\udc561\u2026\ud835\udc35\\beta_{b_{i}}=1.0,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_b start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT = 1.0 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 333 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_3_qos_1.0/select/WMMSE/2.png"
|
| 334 |
+
},
|
| 335 |
+
"10(k)": {
|
| 336 |
+
"figure_path": "2403.11693v3_figure_10(k).png",
|
| 337 |
+
"caption": "Figure 10: Examples of reconstructed images under SNR=3dB, \u03b2bi=1.0,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefdsubscript\ud835\udc4f\ud835\udc561.0for-all\ud835\udc561\u2026\ud835\udc35\\beta_{b_{i}}=1.0,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_b start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT = 1.0 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 338 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_3_qos_1.0/select/SCA_FP/2.png"
|
| 339 |
+
},
|
| 340 |
+
"10(l)": {
|
| 341 |
+
"figure_path": "2403.11693v3_figure_10(l).png",
|
| 342 |
+
"caption": "Figure 10: Examples of reconstructed images under SNR=3dB, \u03b2bi=1.0,\u2200i=1,\u2026,Bformulae-sequencesubscript\ud835\udefdsubscript\ud835\udc4f\ud835\udc561.0for-all\ud835\udc561\u2026\ud835\udc35\\beta_{b_{i}}=1.0,\\forall i=1,...,Bitalic_\u03b2 start_POSTSUBSCRIPT italic_b start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT end_POSTSUBSCRIPT = 1.0 , \u2200 italic_i = 1 , \u2026 , italic_B.",
|
| 343 |
+
"url": "http://arxiv.org/html/2403.11693v3/extracted/5870295/Figures/beamforming_comparison_snr_3_qos_1.0/select/SCA_FP/2.png"
|
| 344 |
+
},
|
| 345 |
+
"11": {
|
| 346 |
+
"figure_path": "2403.11693v3_figure_11.png",
|
| 347 |
+
"caption": "Figure 11: Performance comparison of different K\ud835\udc3eKitalic_K-setting",
|
| 348 |
+
"url": "http://arxiv.org/html/2403.11693v3/x12.png"
|
| 349 |
+
}
|
| 350 |
+
},
|
| 351 |
+
"validation": true,
|
| 352 |
+
"references": [
|
| 353 |
+
{
|
| 354 |
+
"1": {
|
| 355 |
+
"title": "University of illinois Press, 1949.",
|
| 356 |
+
"author": "C. E. Shannon and W. Weaver, The mathematical theory of communication.",
|
| 357 |
+
"venue": null,
|
| 358 |
+
"url": null
|
| 359 |
+
}
|
| 360 |
+
}
|
| 361 |
+
],
|
| 362 |
+
"url": "http://arxiv.org/html/2403.11693v3"
|
| 363 |
+
}
|
20240921/2403.17765v3.json
ADDED
|
@@ -0,0 +1,257 @@
| 1 |
+
{
|
| 2 |
+
"title": "MUTE-SLAM: Real-Time Neural SLAM with Multiple Tri-Plane Hash Representations",
|
| 3 |
+
"abstract": "We introduce MUTE-SLAM, a real-time neural RGB-D SLAM system employing multiple tri-plane hash-encodings for efficient scene representation. MUTE-SLAM effectively tracks camera positions and incrementally builds a scalable multi-map representation for both small and large indoor environments. As previous methods often require pre-defined scene boundaries, MUTE-SLAM dynamically allocates sub-maps for newly observed local regions, enabling constraint-free mapping without prior scene information. Unlike traditional grid-based methods, we use three orthogonal axis-aligned planes for hash-encoding scene properties, significantly reducing hash collisions and the number of trainable parameters. This hybrid approach not only ensures real-time performance but also enhances the fidelity of surface reconstruction. Furthermore, our optimization strategy concurrently optimizes all sub-maps intersecting with the current camera frustum, ensuring global consistency. Extensive testing on both real-world and synthetic datasets has shown that MUTE-SLAM delivers state-of-the-art surface reconstruction quality and competitive tracking performance across diverse indoor settings. The code is available at https://github.com/lumennYan/MUTE_SLAM.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Dense Simultaneous Localization and Mapping (SLAM) has been a fundamental challenge in 3D computer vision for decades, playing a crucial role in applications like robotics, virtual/augmented reality, and autonomous driving. A robust dense SLAM system equipped with RGB-D sensors needs to track camera poses effectively while reconstructing the environment into a high-fidelity map.\nTraditional dense SLAM methods [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###] generate accurate localization results and detailed 3D point positions, but they fall short in rendering novel views or filling unobserved regions. Learning-based systems [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###] have shown promise in large-scale scenes and global 3D map production, yet their reconstruction performance is limited and requires retraining for different scenarios.\nWith the advent of Neural Radiance Field (NeRF) [13 ###reference_b13###], efforts have been made to integrate it into SLAM systems due to its capability to render novel views [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###] and reconstruct 3D surfaces [18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###]. NeRF-based SLAM methods iMAP [22 ###reference_b22###] and NICE-SLAM [23 ###reference_b23###], have demonstrated their applicability across various scenes and ability to predict the appearance of unobserved areas, although their computational demands hinder real-time application. Recent works [24 ###reference_b24###, 25 ###reference_b25###] utilize hash-encoded voxel grids [26 ###reference_b26###] to accelerate convergence and enhance detail fidelity. Despite hash collisions can be implicitly mitigated by the original design of Instant-NGP [26 ###reference_b26###], the reconstructed mesh still suffers from aliasing. As proved in [27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###, 29 ###reference_b29###], projecting spatial points into a tri-plane reprensentation can efficiently reduce scene parameters while preserving geometric and color information. Therefore following [30 ###reference_b30###], we leverage a tri-plane hash-encoding method to store scene features and minimize hash collisions. Moreover, current Nerf-based SLAM systems [22 ###reference_b22###, 23 ###reference_b23###, 28 ###reference_b28###, 24 ###reference_b24###, 31 ###reference_b31###] require pre-set scene boundaries, limiting their application in unknown environments. Some [25 ###reference_b25###, 32 ###reference_b32###] address this with octree-based voxel grids, but they still necessitate an initially defined loose boundary and struggle to reconstruct beyond these limits. Although neural point cloud-based method [33 ###reference_b33###] does not have such a concern, the point cloud-based representation requires large time and memory consumption, making it impractical for real-time usage. Our proposed MUTE-SLAM overcomes this by introducing a multi-map-based scene representation. 
Through dynamically allocating new sub-maps upon detecting new areas, MUTE-SLAM can be deployed in environments of any size, given reasonable RGB-D sensor observations, while maintaining reasonable runtime and memory overhead.\nIn summary, our contributions include:\nA multi-map-based scene representation facilitating reconstruction scalable to diverse indoor scenarios.\nA tri-plane hash-encoding method for sub-maps which enables real-time tracking and anti-aliasing dense mapping with high-fidelity details.\nA optimization strategy that jointly optimizes all sub-maps observed currently, ensuring global consistency.\nExtensive experimental validation on various datasets, demonstrating our system\u2019s scalability and effectiveness in both tracking and mapping.\n###figure_1###"
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Related Works",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A Dense Visual SLAM",
|
| 21 |
+
"text": "DTAM [2 ###reference_b2###] is the first dense SLAM system to employ direct methods, utilizing information from all pixels for tracking by comparing newly inputted RGB frames with a reconstructed dense model. KinectFusion [3 ###reference_b3###], leveraging a RGB-D sensor, uses a volumetric Truncated Signed Distance Function (TSDF) to fuse scene geometry and tracks camera positions via Iterative Closest Point (ICP). Subsequent works have focused on improving scalability through new data structures [7 ###reference_b7###, 4 ###reference_b4###], enhancing global consistency with Bundle Adjustment (BA) [9 ###reference_b9###, 8 ###reference_b8###], and increasing efficiency [5 ###reference_b5###]. Recent learning-based methods [11 ###reference_b11###, 12 ###reference_b12###, 10 ###reference_b10###] demonstrate superior accuracy and robustness compared to traditional approaches on single scenes, but they struggle with generalization across varied scenes."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B Neural Implicit SLAM",
|
| 27 |
+
"text": "iMAP [22 ###reference_b22###] is the first SLAM system to incorporate NeRF [13 ###reference_b13###], it models the environment into a single Multilayer Perceptron (MLP) and jointly optimizes the map and camera poses. NICE-SLAM [23 ###reference_b23###] substitutes the scene representation in iMAP with hierarchical voxel girds to resolve the forgetting issue, achieving enhanced tracking and mapping in large indoor environments. However, both approaches face limitations in handling unknown environments due to the requirement for a prior scene boundary. Vox-Fusion [32 ###reference_b32###] attempts to address this by introducing octree-based voxel grids as the map representation but is still limited to the initially defined spatial scope.\nRecent advancements in NeRF-based SLAM have improved tracking, mapping, and running speed. ESLAM [28 ###reference_b28###] stores features on multi-scale axis-aligned planes and employs rendering based on TSDF. Co-SLAM [24 ###reference_b24###] combines coordinate and hash-encodings for input points to achieve both smooth meshes and fast convergence, and introduces a real-time global bundle adjustment mechanism using a ray list sampled from past keyframes. GO-SLAM [31 ###reference_b31###] and H2-mapping [25 ###reference_b25###] combine traditional SLAM modules with neural mapping, achieving high tracking accuracy. Despite these improvements, scalability remains a challenge. Our work addresses this by introducing a multi-map solution with an accompanying optimization strategy, which requires no pre-set boundaries. Besides, we represent each sub-map with a tri-plane hash-encoding which allows for fast convergence and detailed surface reconstruction.\n###figure_2###"
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "III Method",
|
| 33 |
+
"text": "The overview of MUTE-SLAM is illustrated in fig. 2 ###reference_###. Given an input RGB-D stream , the system tracks their 6-DOF camera poses and conducts mapping to optimize a multi-map implicit scene representation .\nWhen the camera captures a new region, a corresponding sub-map is created to represent it, as detailed in section III-A ###reference_###.\nEach local map encodes a point coordinate within its domin with three orthogonal TSDF feature planes and three color planes, as described in section III-B ###reference_###.\nThese features from the sub-maps are decoded into TSDF and color using two separate MLPs, initiating the volume rendering process (section III-C ###reference_###). The rendered depth and color images are subsequently utilized to jointly optimize the sub-maps and camera poses. Additionally, periodic global bundle adjustments are implemented to ensure global consistency(section III-D ###reference_###)."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "III-A Multi-map Scene Representation",
|
| 39 |
+
"text": "As previous neural implicit SLAM methods [22 ###reference_b22###, 23 ###reference_b23###, 32 ###reference_b32###, 28 ###reference_b28###, 24 ###reference_b24###] are restricted to functioning within pre-defined scene boundaries, they are unsuitable for navigating and mapping large, unknown indoor environments. Consequently, performing tracking and mapping incrementally with no prior environment information becomes a critical issue. MUTE-SLAM addresses this problem by adopting a multi-map scene representation approach. We encode the whole scene with several sub-maps , each to express a local region, enabling the reconstruction of indoor scenes of arbitrary shapes and sizes.\nAfter tracking an input RGB-D frame, points are randomly sampled from the depth image and projected into the world coordinate system with the estimated camera pose . Points with invalid depths are filtered out, and outliers are removed to mitigate noise. If the proportion of points that fall outside all existing sub-maps exceeds a predetermined threshold , a new local map is generated. The local map\u2019s size is determined by extending the cuboid vicinity of the current camera position and the points that are out of bounds over a length of , which is a hyperparameter. The redundancy of a sub-map\u2019s boundary would reduce the number of total sub-maps, thereby lowering memory consumption. Concurrently, the corresponding frame is added to the global keyframe database. For optimization, rays are sampled from the current frame and co-visible keyframes. Rays terminating outside all sub-maps are removed from the training process. Specifically, rather than optimizing a single sub-map at a time, we simultaneously optimize all observed sub-maps to ensure global consistency as a input frame\u2019s frustum may intersect with multiple sub-maps. Global bundle adjustments are also employed periodically to further enhance global consistency."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "III-B Tri-plane Hash Encoding",
|
| 45 |
+
"text": "Lately, hash-encoding [26 ###reference_b26###] has gained much attention in the NeRF community [16 ###reference_b16###, 25 ###reference_b25###, 21 ###reference_b21###, 24 ###reference_b24###] due to its fast convergence and strong environmental representation capabilities. Despite that, hash collisions are inevitable and could lead to artifacts in the reconstructed scenes. The standard solution is to let the light MLP decoder handle hash collisions implicitly or use larger hash tables. But as the scene grows large and complicated, the MLP will reach its limit and a capacious hash table consumes substantial memory space. Meanwhile, tri-plane encoding approaches [27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###, 29 ###reference_b29###] have demonstrated competence in surface and appearance reconstruction with low memory consumption. Combining the advantages of both worlds, we represent each sub-map by tri-plane hash-encoding.\nIn MUTE-SLAM, a local map is encoded by three orthogonal planes for TSDF and another three for color, each plane denotes a 2D hash encoder as in [26 ###reference_b26###]. All planes share the same resolution levels , base resolution , finest resolution , per level feature dimension and hash table size . The finest resolution and hash table size are determined by the local map volume :\nOther parameters are set as hyperparameters. When a point falls within a sub-map, it is orthogonally projected onto the corresponding planes. The encoder then interpolates the point features bilinearly by querying the nearest four vertices from each level of the hash table, concatenating features across all levels to produce the final output. Consequently, the TSDF and color feature vectors are derived by summing up the outputs from the three planes:\nWe employ two separate double-layer MLPs to decode the TSDF and RGB values respectively:\nThe TSDF and color values are then utilized in the volume rendering module.\nThe plane-based representation, growing quadratically with scene size, results in fewer hash table queries and hence fewer collisions compared to grid representations, given equal hash table sizes. Furthermore, since a point\u2019s feature vector is a composite of inputs from three distinct encoders, the probability of encountering conflicts across all encoders is significantly reduced. In situations where collisions do occur in one encoder, the impact is mitigated by the inputs from the other encoders, thus lessening the overall adverse effect. section IV-D ###reference_### demonstrates the effectiveness of our tri-plane hash-encoding."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.3",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "III-C TSDF-based Volume Rendering",
|
| 51 |
+
"text": "MUTE-SLAM follows the TSDF-based rendering procedure in [28 ###reference_b28###]. For an input frame , we sample randomly from pixels with valid depths, casting rays into the world coordinate system using the estimated pose . We apply stratified sampling to acquire points along a ray, distributed between the and bounds. Initially, points are uniformly sampled across the entire sampling region. Then within a smaller truncated distance near the surface, extra points are sampled uniformly, where denotes the ground truth depth value.\nEach sampled point is represented by the ray\u2019s origin , direction and depth . For all points along a ray, we predict the color and depth with color and TSDF values retrieved from MLPs:\nFor , as in [19 ###reference_b19###], it is derived from the volume density and TSDF value :\nHere, is a learnable parameter which controls the sharpness of surfaces."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.4",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "III-D Tracking and Mapping",
|
| 57 |
+
"text": ""
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.4.1",
|
| 61 |
+
"parent_section_id": "3.4",
|
| 62 |
+
"section_name": "III-D1 Loss Functions.",
|
| 63 |
+
"text": "We apply four loss functions to optimize the scene representation, MLPs and camera poses: RGB loss, depth loss, TSDF loss and free-space loss. Once a batch of rays are selected, the RGB and depth loss are obtained as errors between rendered and ground truth values:\nThe free-space loss is applied to supervise the points far from the surfaces () to have a truncated TSDF value of :\nFor points near the surface (), similar to [28 ###reference_b28###], we further split them into two parts to obtain the TSDF loss:\nWhere the middle points with depths reside in have a larger weight , while the others have a smaller weight .\nThe final loss is the weighted sum of the objective functions above:"
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.4.2",
|
| 67 |
+
"parent_section_id": "3.4",
|
| 68 |
+
"section_name": "III-D2 Tracking.",
|
| 69 |
+
"text": "We track the camera-to-world transformation matrix for every input frame . When receiving a input frame , its initial pose is obtained using constant speed assumption:\nThen, the pose is transformed into a seven-dimensional vector for optimization, which is formed by concatenating the rotation quaternion and the translation vector . We sample uniformly pixels from frame and optimize the pose iteratively using all loss functions, while keeping the scene parameters and MLPs fixed."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "3.4.3",
|
| 73 |
+
"parent_section_id": "3.4",
|
| 74 |
+
"section_name": "III-D3 Mapping.",
|
| 75 |
+
"text": "MUTE-SLAM performs mapping every frames and inserts the mapped frame as a keyframe into the global keyframe database. When the mapping thread starts, we first sample rays from the current frame and keyframes having co-visibility with current frame. Then we filter out rays whose bounds lay outside all sub-maps. Each point on the rays is encoded within the corresponding sub-map. Since we define loose boundaries for sub-maps, only the oldest map is used when points fall into the area where multiple sub-maps overlap. At last, we jointly optimize all observed sub-maps, MLPs and camera poses iteratively with the objective functions. Specifically, we use the ground truth pose at the first input frame and only optimize scene parameters and MLPs for initialization."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "3.4.4",
|
| 79 |
+
"parent_section_id": "3.4",
|
| 80 |
+
"section_name": "III-D4 Bundle Adjustment.",
|
| 81 |
+
"text": "Once the keyframe database has accumulated a sufficient number of frames, global bundle adjustment is initiated for every twenty frames of input. From the keyframe database, frames are globally sampled, with all trainable parameters optimized in a manner akin to the mapping thread. The global bundle adjustment module plays a crucial role in correcting drifting poses and bolstering global consistency."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "IV Experiments.",
|
| 87 |
+
"text": ""
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.1",
|
| 91 |
+
"parent_section_id": "4",
|
| 92 |
+
"section_name": "IV-A Experimental Setup.",
|
| 93 |
+
"text": ""
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "4.1.1",
|
| 97 |
+
"parent_section_id": "4.1",
|
| 98 |
+
"section_name": "IV-A1 Baselines",
|
| 99 |
+
"text": "We choose state-of-art NeRF-based dense SLAM approaches ESLAM [28 ###reference_b28###], Co-SLAM [24 ###reference_b24###] and Point-SLAM [33 ###reference_b33###] as our main baselines for both surface reconstruction and camera tracking. To better evaluate our proposed MUTE-SLAM on pose estimation, we also compare with previous methods NICE-SLAM [23 ###reference_b23###] and Vox-Fusion [32 ###reference_b32###]. We run these methods using the default settings provided in their open-source code. We do not compare with[25 ###reference_b25###] and [31 ###reference_b31###] as they both use traditional SLAM modules for camera poses estimation."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "4.1.2",
|
| 103 |
+
"parent_section_id": "4.1",
|
| 104 |
+
"section_name": "IV-A2 Datasets",
|
| 105 |
+
"text": "We evaluate MUTE-SLAM on various 3D benchmarks of indoor scenarios. For quantitative evaluation of the reconstruction quality, we use 8 synthetic scenes from Replica [35 ###reference_b35###]. To validate effectiveness on pose tracking, we conduct experiments on 6 real-world scenes from ScanNet [36 ###reference_b36###] and 3 real-world scenes from TUM-RGBD [34 ###reference_b34###] dataset. We also demonstrate the scalability of MUTE-SLAM on large-scale Apartment dataset provided by NICE-SLAM [23 ###reference_b23###]."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "4.1.3",
|
| 109 |
+
"parent_section_id": "4.1",
|
| 110 |
+
"section_name": "IV-A3 Metrics",
|
| 111 |
+
"text": "For surface reconstruction, We adopt four evaluation metrics: , , and . Additionally, to underscore the ability of our method to produce detailed geometry compared to ESLAM [28 ###reference_b28###], we also incorporate the metric. Following [18 ###reference_b18###, 37 ###reference_b37###, 28 ###reference_b28###], before evaluation, we remove faces that are not inside any camera frustum or are occluded in all RGB-D frames from the reconstructed mesh. For the evaluation of camera tracking, we employ [34 ###reference_b34###]."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "4.1.4",
|
| 115 |
+
"parent_section_id": "4.1",
|
| 116 |
+
"section_name": "IV-A4 Implementation Details",
|
| 117 |
+
"text": "We run all experiments on a desktop PC with a 3.70GHz Intel i9-10900K CPU and an NVIDIA RTX 3080 GPU. For local map creation, the threshold is set to 0.2 for Replica [35 ###reference_b35###] and 0.25 for the other datasets, while the expanding size is 1 m for Replica [35 ###reference_b35###], 1.5 m for ScanNet [36 ###reference_b36###], 2.5m for Apartment [23 ###reference_b23###] and 3 m for TUM-RGBD [34 ###reference_b34###]. Each hash encoder has the same base resolution , resolution levels , per level feature dimension , resulting in 32 dimensions of input for MLPs. The MLPs both have two hidden layers of 32 channels. For rendering, we set the near-surface truncated distance to 6 cm and regular sampling number to 32. Particularly, we sample near-surface points for Replica [35 ###reference_b35###] and TUM-RGBD [34 ###reference_b34###], and points for ScanNet [36 ###reference_b36###] and Apartment [23 ###reference_b23###]. Please refer to the supplementary materials for further details of our implementation."
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "4.2",
|
| 121 |
+
"parent_section_id": "4",
|
| 122 |
+
"section_name": "IV-B Evaluation of Mapping and Tracking",
|
| 123 |
+
"text": ""
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "4.2.1",
|
| 127 |
+
"parent_section_id": "4.2",
|
| 128 |
+
"section_name": "IV-B1 Evaluation on Replica [35]",
|
| 129 |
+
"text": "We compare the reconstruction performance on Replica [35 ###reference_b35###] only with Co-SLAM [24 ###reference_b24###], Point-SLAM [33 ###reference_b33###] and ESLAM [28 ###reference_b28###] as they significantly outperform previous methods [22 ###reference_b22###, 23 ###reference_b23###, 32 ###reference_b32###]. However, the origin setting of Point-SLAM takes hours on Replica, which is unfair for other baselines as they only require a few minutes. Thus we modified the number of rays to 2000 and iterations to 10 for Point-SLAM, and denote the modified version as . Note that [sandstrom2023poin] still takes two times longer than other methods. For quantitative analysis, we run each method five times and report the average results. As shown in table I ###reference_###, our approach outperforms Co-SLAM [24 ###reference_b24###] and [33 ###reference_b33###] on all scenes and shows competitive performance with ESLAM [28 ###reference_b28###]. Due to the use of a joint coordinate and parametric encoding, Co-SLAM [24 ###reference_b24###] tends to produce over-smoothed surfaces, which leads to the amplification of reconstruction error. Point-SLAM [33 ###reference_b33###] can reconstruct fine scene contents as in the origin setting, but is impractical in real-time use. Although ESLAM [28 ###reference_b28###] achieves high overall accuracy, it falls short in preserving surface details. To further highlight our method\u2019s superiority in capturing scene details, we compare the with ESLAM [28 ###reference_b28###] in table II ###reference_###. Qualitative results in fig. 3 ###reference_### also demonstrate that MUTE-SLAM effectively reconstructs detailed environmental geometry with fewer artifacts. Because the modified version [sandstrom2023poin] performs poorly, we do not show its qualitative results.\n###figure_3###"
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "4.2.2",
|
| 133 |
+
"parent_section_id": "4.2",
|
| 134 |
+
"section_name": "IV-B2 Evaluation on ScanNet[36]",
|
| 135 |
+
"text": "###figure_4### We assessed camera tracking accuracy on six real-world scenes from the ScanNet dataset [36 ###reference_b36###] and report the average ATE RMSE [34 ###reference_b34###] across five runs for each scene and method in table III ###reference_###. Our approach, MUTE-SLAM, exhibits competitive results in these tests. Notably, even without pre-defined scene boundaries, MUTE-SLAM consistently outperforms Co-SLAM [24 ###reference_b24###] in all tested scenes. While ESLAM [28 ###reference_b28###] achieves the best overall performance, it does so with twice the average processing time as ours (table VI ###reference_###). Our approach not only surpasses ESLAM in several scenes but also secures the second-best overall result.Due to the incomplete nature of ScanNet [36 ###reference_b36###] meshes, we present only qualitative reconstruction results in fig. 4 ###reference_###. These results highlight MUTE-SLAM\u2019s ability to capture finer details and achieve a high level of completeness in reconstructions."
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"section_id": "4.2.3",
|
| 139 |
+
"parent_section_id": "4.2",
|
| 140 |
+
"section_name": "IV-B3 Evaluation on TUM RGB-D [34]",
|
| 141 |
+
"text": "To further evaluate the tracking accuracy of MUTE-SLAM, we conducted experiments on real-world scenes from the TUM RGB-D dataset [34 ###reference_b34###], with results averaged over five runs. Instance of failure is denoted as \u2018N/A\u2019. Noted that ESLAM [28 ###reference_b28###] runs unsuccessfully on the \u2019fr2/xyz\u2019 scene and has been consequently excluded from these comparative results. As shown in table IV ###reference_###, our quantitative analysis reveals that MUTE-SLAM not only outperforms NICE-SLAM [23 ###reference_b23###] and Co-SLAM [24 ###reference_b24###] but also demonstrates competitive performance and superior robustness compared to ESLAM [28 ###reference_b28###], which takes hours to run on this dataset while ours only takes a few minutes. We also compare with some traditional SLAM methods [9 ###reference_b9###, 5 ###reference_b5###, 38 ###reference_b38###] on this dataset. While NeRF-based SLAM methods still lag behind them, MUTE-SLAM narrows the gap."
|
| 142 |
+
},
|
| 143 |
+
{
|
| 144 |
+
"section_id": "4.2.4",
|
| 145 |
+
"parent_section_id": "4.2",
|
| 146 |
+
"section_name": "IV-B4 Evaluation on Apartment [23]",
|
| 147 |
+
"text": "To demonstrate the effectiveness of our method in large scale indoor scenarios, we evaluate the tracking and surface reconstruction performance on Apartment dataset provided by NICE-SLAM [23 ###reference_b23###]. The failed instance is denoted as N/A. As illustrated in table V ###reference_###, our method yields reasonable tracking performance. It should be emphasized that our method runs the fastest on this dataset, as discussed in section IV-C ###reference_###."
|
| 148 |
+
},
|
| 149 |
+
{
|
| 150 |
+
"section_id": "4.3",
|
| 151 |
+
"parent_section_id": "4",
|
| 152 |
+
"section_name": "IV-C Performance Analysis",
|
| 153 |
+
"text": "We conducted a comparative analysis of the speed and memory consumption between our proposed MUTE-SLAM and the state-of-the-art methods ESLAM [28 ###reference_b28###] and Co-SLAM [24 ###reference_b24###]. Evaluations were performed on diverse scales of scenes: the small-scale \u2019room0\u2019 from Replica [35 ###reference_b35###], the mid-scale \u20190000\u2019 from ScanNet [36 ###reference_b36###], and the large-scale Apartment scene from NICE-SLAM [23 ###reference_b23###]. Our metrics included average frame processing time (FPT) and the model\u2019s parameter count. As shown in table VI ###reference_###, MUTE-SLAM not only operates faster in large scale scenes but also requires less memory compared to ESLAM [28 ###reference_b28###]. Notably, in large-scale scenarios like Apartment [23 ###reference_b23###], MUTE-SLAM achieves even speed advantages over Co-SLAM [24 ###reference_b24###]. Moreover, the FPT and memory usage of MUTE-SLAM remain relatively stable across scene sizes, a benefit attributable to our scene representation design. Although Co-SLAM\u2019s [24 ###reference_b24###] coordinate hash encoding reduces runtime and memory usage, its smoothing effect hinders detailed scene reconstruction. Therefore, coordinate hash encoding is preferable when fine-grained reconstructions are unnecessary, whereas our tri-plane hash offers a better balance of overhead and performance for detailed needs."
|
| 154 |
+
},
|
| 155 |
+
{
|
| 156 |
+
"section_id": "4.4",
|
| 157 |
+
"parent_section_id": "4",
|
| 158 |
+
"section_name": "IV-D Ablations",
|
| 159 |
+
"text": "###figure_5###"
|
| 160 |
+
},
|
| 161 |
+
{
|
| 162 |
+
"section_id": "4.4.1",
|
| 163 |
+
"parent_section_id": "4.4",
|
| 164 |
+
"section_name": "IV-D1 Multi-map representation",
|
| 165 |
+
"text": "We conducted ablation experiments on the Replica [35 ###reference_b35###] and ScanNet [36 ###reference_b36###] datasets to evaluate the impact of various components of our design. The quantitative results of this study are detailed in table VII ###reference_###. Representing the scene with one map, our findings indicate that the multi-map representation improves tracking performance. This enhancement in tracking accuracy, in turn, leads to higher quality in the reconstruction process. This improvement can be attributed to our strategy for allocating submaps. By extending the boundaries of each submap over a defined length, the corresponding hash tables are able to attain larger sizes, which contributes to better overall system performance. To further prove that ill-set scene boundaries harm performances of mapping and tracking, we conducted a experiment on the Replica [35 ###reference_b35###] \u2019room0\u2019 sequence for all baselines in the need of pre-set scene boundaries as in table VIII ###reference_###. To simulate the scenario where the camera exits the pre-set boundary, we modify the boundary to a cube with a side length of 10m for all baselines (except for Vox-Fusion due to its implementation) and report the \u2019origin / ill-set\u2019 results. Note that NICE-SLAM fails on this setting. The results indicate significant degradation for all baselines, meanwhile our multi-map representation will not encounter such a problem."
|
| 166 |
+
},
|
| 167 |
+
{
|
| 168 |
+
"section_id": "4.4.2",
|
| 169 |
+
"parent_section_id": "4.4",
|
| 170 |
+
"section_name": "IV-D2 Tri-plane hash-encoding",
|
| 171 |
+
"text": "To assess the effectiveness of tri-plane hash-encoding, we conducted an experiment where we replaced the tri-plane in each hash-encoding with a grid, while simultaneously tripling the maximum hash table size . This adjustment marginally increases the overall capacity of the hash tables compared to the tri-plane approach. table VII ###reference_### shows that tri-plane hash-encoding achieves superior tracking results, a higher completion ratio, and better-rendered depth images. Although grid hash-encoding excels in terms of accuracy and completion, it leads to artifacts in the reconstructed mesh due to hash collisions, which in turn affects tracking accuracy. As illustrated in fig. 5 ###reference_###, our qualitative comparison demonstrates that our proposed tri-plane hash-encoding effectively reduces aliasing and preserves scene details more accurately."
|
| 172 |
+
},
|
| 173 |
+
{
|
| 174 |
+
"section_id": "4.4.3",
|
| 175 |
+
"parent_section_id": "4.4",
|
| 176 |
+
"section_name": "IV-D3 Global bundle adjustment",
|
| 177 |
+
"text": "As illustrated in table VII ###reference_###, the lack of global bundle adjustment leads to higher ATE errors and relatively low reconstruction performance. As global bundle adjustment corrects drifting poses and refines scene representation, it plays a critical role in ensuring robustness and global consistency in our method."
|
| 178 |
+
},
|
| 179 |
+
{
|
| 180 |
+
"section_id": "5",
|
| 181 |
+
"parent_section_id": null,
|
| 182 |
+
"section_name": "Conclusion",
|
| 183 |
+
"text": "We presented MUTE-SLAM, a dense real-time neural RGB-D SLAM system utilizing multiple tri-plane hash-encodings as scene representation. We demonstrate that utilizing several sub-maps to express the scene ensures scalability, making our method applicable to various indoor scenarios. We also show that integrating tri-plane with hash-encoding diminishes hash collisions and trainable parameters, producing high-fidelity surface reconstruction and low memory usage. Moreover, we perform global bundle adjustment periodically to achieve accurate poses estimation and maintain global consistency."
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"section_id": "6",
|
| 187 |
+
"parent_section_id": null,
|
| 188 |
+
"section_name": "VI Limitations.",
|
| 189 |
+
"text": "Our method relies on the valid observation of RGB-D sensors, thus is susceptible to illumination changes and inaccurate depth measurements. Additionally, our approach of randomly sampling from all historical keyframes for global bundle adjustment might result in insufficient optimization in less frequently observed regions, potentially compromising reconstruction quality in these areas."
|
| 190 |
+
}
|
| 191 |
+
],
|
| 192 |
+
"appendix": [],
|
| 193 |
+
"tables": {
|
| 194 |
+
"1": {
|
| 195 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Quantitative results of reconstruction on Replica <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.17765v3#bib.bib35\" title=\"\">35</a>]</cite> dataset.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.5\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T1.4.4.5\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.5.1\" style=\"font-size:70%;\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_text\" id=\"S4.T1.1.1.1.1\" style=\"font-size:70%;\">Depth L1 (cm) </span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.2.2.2\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_text\" id=\"S4.T1.2.2.2.1\" style=\"font-size:70%;\">Acc. (cm) </span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.3.3.3\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_text\" id=\"S4.T1.3.3.3.1\" style=\"font-size:70%;\">Comp. (cm) </span>\n</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.4.4.4\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_text\" id=\"S4.T1.4.4.4.1\" style=\"font-size:70%;\">Cp. Ratio (%) </span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.5.6.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.5.6.1.1\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.1.1\" style=\"font-size:70%;\">ESLAM</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.5.6.1.2\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.2.1\" style=\"font-size:70%;\">1.19</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.5.6.1.3\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.3.1\" style=\"font-size:70%;\">0.94</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.5.6.1.4\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.4.1\" style=\"font-size:70%;\">0.96</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T1.5.6.1.5\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.6.1.5.1\" style=\"font-size:70%;\">99.31</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.5.7.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.5.7.2.1\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.1.1\" style=\"font-size:70%;\">Co-SLAM</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.7.2.2\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.2.1\" style=\"font-size:70%;\">3.22</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.7.2.3\" 
style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.3.1\" style=\"font-size:70%;\">1.18</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.7.2.4\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.4.1\" style=\"font-size:70%;\">1.12</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.5.7.2.5\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.7.2.5.1\" style=\"font-size:70%;\">98.49</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.5.5.1\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.5.2\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.2.1\" style=\"font-size:70%;\">8.39</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.5.3\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.3.1\" style=\"font-size:70%;\">3.72</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.5.4\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.4.1\" style=\"font-size:70%;\">2.10</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T1.5.5.5\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.5.1\" style=\"font-size:70%;\">94.09</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.5.8.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T1.5.8.3.1\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.8.3.1.1\" style=\"font-size:70%;\">Ours</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.5.8.3.2\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.5.8.3.2.1\" style=\"font-size:70%;\">1.18</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.5.8.3.3\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.5.8.3.3.1\" style=\"font-size:70%;\">0.91</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.5.8.3.4\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.5.8.3.4.1\" style=\"font-size:70%;\">0.95</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S4.T1.5.8.3.5\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.5.8.3.5.1\" style=\"font-size:70%;\">99.34</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 196 |
+
"capture": "TABLE I: Quantitative results of reconstruction on Replica [35] dataset."
|
| 197 |
+
},
|
| 198 |
+
"2": {
|
| 199 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Comparison of with ESLAM<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.17765v3#bib.bib28\" title=\"\">28</a>]</cite>.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.3.1.1\">\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T2.3.1.1.1\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.1.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.1.1.1.1.1\" style=\"width:19.9pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.1.1.1.1\" style=\"font-size:70%;\">Method</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.3.1.1.2\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.1.1.2.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.1.1.2.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.2.1.1.1\" style=\"font-size:70%;\">Room0</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.3.1.1.3\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.1.1.3.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.1.1.3.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.3.1.1.1\" style=\"font-size:70%;\">Room1</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.3.1.1.4\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.1.1.4.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.1.1.4.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.4.1.1.1\" style=\"font-size:70%;\">Room2</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.3.1.1.5\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.1.1.5.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.1.1.5.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.5.1.1.1\" style=\"font-size:70%;\">Office0</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.3.1.1.6\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.1.1.6.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.1.1.6.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.6.1.1.1\" style=\"font-size:70%;\">Office1</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.3.1.1.7\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.1.1.7.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.1.1.7.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.7.1.1.1\" style=\"font-size:70%;\">Office2</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.3.1.1.8\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.1.1.8.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.1.1.8.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.8.1.1.1\" 
style=\"font-size:70%;\">Office3</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.3.1.1.9\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.1.1.9.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.1.1.9.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.9.1.1.1\" style=\"font-size:70%;\">Office4</span></span>\n</span>\n</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.3.1.1.10\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.1.1.10.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.1.1.10.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.10.1.1.1\" style=\"font-size:70%;\">Avg.</span></span>\n</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.3.2.1\">\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.3.2.1.1\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.2.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.2.1.1.1.1\" style=\"width:19.9pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.2.1.1.1.1.1\" style=\"font-size:70%;\">ESLAM</span></span>\n</span>\n</th>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T2.3.2.1.2\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.2.1.2.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.2.1.2.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.2.1.2.1.1.1\" style=\"font-size:70%;\">55.21</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T2.3.2.1.3\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.2.1.3.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.2.1.3.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.2.1.3.1.1.1\" style=\"font-size:70%;\">72.77</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T2.3.2.1.4\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.2.1.4.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.2.1.4.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.2.1.4.1.1.1\" style=\"font-size:70%;\">65.94</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T2.3.2.1.5\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.2.1.5.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.2.1.5.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.2.1.5.1.1.1\" style=\"font-size:70%;\">76.30</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T2.3.2.1.6\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.2.1.6.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.2.1.6.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.2.1.6.1.1.1\" style=\"font-size:70%;\">86.17</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T2.3.2.1.7\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.2.1.7.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.2.1.7.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.2.1.7.1.1.1\" style=\"font-size:70%;\">62.90</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify 
ltx_border_t\" id=\"S4.T2.3.2.1.8\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.2.1.8.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.2.1.8.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.2.1.8.1.1.1\" style=\"font-size:70%;\">49.05</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S4.T2.3.2.1.9\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.2.1.9.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.2.1.9.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.2.1.9.1.1.1\" style=\"font-size:70%;\">54.55</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_justify ltx_border_t\" id=\"S4.T2.3.2.1.10\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.2.1.10.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.2.1.10.1.1\"><span class=\"ltx_text\" id=\"S4.T2.3.2.1.10.1.1.1\" style=\"font-size:70%;\">65.36</span></span>\n</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.2\">\n<th class=\"ltx_td ltx_align_justify ltx_align_top ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T2.3.3.2.1\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.3.2.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.3.2.1.1.1\" style=\"width:19.9pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.3.2.1.1.1.1\" style=\"font-size:70%;\">Ours</span></span>\n</span>\n</th>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S4.T2.3.3.2.2\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.3.2.2.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.3.2.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.2.2.1.1.1\" style=\"font-size:70%;\">56.29</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S4.T2.3.3.2.3\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.3.2.3.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.3.2.3.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.2.3.1.1.1\" style=\"font-size:70%;\">74.55</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S4.T2.3.3.2.4\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.3.2.4.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.3.2.4.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.2.4.1.1.1\" style=\"font-size:70%;\">66.62</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S4.T2.3.3.2.5\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.3.2.5.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.3.2.5.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.2.5.1.1.1\" style=\"font-size:70%;\">76.93</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S4.T2.3.3.2.6\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.3.2.6.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.3.2.6.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.2.6.1.1.1\" style=\"font-size:70%;\">88.61</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S4.T2.3.3.2.7\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block 
ltx_align_top\" id=\"S4.T2.3.3.2.7.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.3.2.7.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.2.7.1.1.1\" style=\"font-size:70%;\">64.28</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S4.T2.3.3.2.8\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.3.2.8.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.3.2.8.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.2.8.1.1.1\" style=\"font-size:70%;\">49.51</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb\" id=\"S4.T2.3.3.2.9\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.3.2.9.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.3.2.9.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.2.9.1.1.1\" style=\"font-size:70%;\">55.67</span></span>\n</span>\n</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_justify ltx_border_bb\" id=\"S4.T2.3.3.2.10\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_inline-block ltx_align_top\" id=\"S4.T2.3.3.2.10.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.3.2.10.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.2.10.1.1.1\" style=\"font-size:70%;\">66.56</span></span>\n</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 200 |
+
"capture": "TABLE II: Comparison of with ESLAM[28]."
|
| 201 |
+
},
|
| 202 |
+
"3": {
|
| 203 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Quantitative results of ATE RMSE (cm) on ScanNet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.17765v3#bib.bib36\" title=\"\">36</a>]</cite>. </figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T3.1.1.1.1\">SceneID</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1.2\">0000</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1.3\">0059</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1.4\">0106</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1.5\">0169</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1.6\">0181</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1.7\">0207</th>\n<th class=\"ltx_td ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1.8\">Avg.</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.1.2.1.1\">NICE-SLAM</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.1.2\">8.61</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.1.3\">12.24</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.1.4\">8.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.1.5\">10.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.1.6\">13.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.1.7\">5.55</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S4.T3.1.2.1.8\">9.62</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.1.3.2.1\">Vox-Fusion</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.3.2.2\">8.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.3.2.3\">9.20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.3.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.3.2.4.1\">7.42</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.3.2.5\">6.60</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.3.2.6\">12.13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.3.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.3.2.7.1\">5.51</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.1.3.2.8\">8.21</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.1.4.3.1\">ESLAM</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.4.3.2\">7.37</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.4.3.3\">8.31</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.4.3.4\">7.59</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.4.3.5\">6.45</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.4.3.6\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S4.T3.1.4.3.6.1\">9.29</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.4.3.7\">5.65</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.1.4.3.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.4.3.8.1\">7.44</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.1.5.4.1\">Co-SLAM</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.4.2\">7.77</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.4.3\">12.52</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.4.4\">9.39</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.4.5\">6.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.4.6\">12.35</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.5.4.7\">7.65</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.1.5.4.8\">9.34</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.1.6.5.1\">Point-SLAM</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.6.5.2\">10.24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.6.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.6.5.3.1\">7.81</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.6.5.4\">8.65</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.6.5.5\">22.16</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.6.5.6\">14.77</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.6.5.7\">9.54</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S4.T3.1.6.5.8\">12.19</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.7.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T3.1.7.6.1\">Ours</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.1.7.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.7.6.2.1\">7.08</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.1.7.6.3\">9.07</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.1.7.6.4\">8.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.1.7.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.7.6.5.1\">6.18</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.1.7.6.6\">10.21</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.1.7.6.7\">7.19</td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_center ltx_border_bb\" id=\"S4.T3.1.7.6.8\">8.00</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 204 |
+
"capture": "TABLE III: Quantitative results of ATE RMSE (cm) on ScanNet [36]. "
|
| 205 |
+
},
|
| 206 |
+
"4": {
|
| 207 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE IV: </span>Quantitative results of ATE RMSE (cm) on TUM RGB-D<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.17765v3#bib.bib34\" title=\"\">34</a>]</cite>.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T4.1.1.1.1\">Method</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.1.1.1.2\">fr1/desk(cm)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.1.1.1.3\">fr2/xyz(cm)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T4.1.1.1.4\">fr3/office(cm)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T4.1.2.2.1\">NICE-SLAM</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.2.2.2\">2.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.2.2.3\">1.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.2.2.4\">3.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T4.1.3.3.1\">ESLAM</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.3.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.3.3.2.1\">2.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.3.3.3\">N/A</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.3.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.3.3.4.1\">2.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T4.1.4.4.1\">Co-SLAM</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.4.4.2\">2.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.4.4.3\">1.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.4.4.4\">2.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T4.1.5.5.1\">Ours</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.5.5.2\">2.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.5.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.5.5.3.1\">1.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.5.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.5.5.4.1\">2.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T4.1.6.6.1\">BAD-SLAM</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.6.6.2\">1.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.6.6.3\">1.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.1.6.6.4\">1.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T4.1.7.7.1\">Kintinuous</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.7.7.2\">3.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.7.7.3\">2.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.7.7.4\">3.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.1.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th 
ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T4.1.8.8.1\">ORB-SLAM2</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.1.8.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.8.8.2.1\">1.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.1.8.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.8.8.3.1\">0.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.1.8.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.1.8.8.4.1\">1.0</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 208 |
+
"capture": "TABLE IV: Quantitative results of ATE RMSE (cm) on TUM RGB-D[34]."
|
| 209 |
+
},
|
| 210 |
+
"5": {
|
| 211 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE V: </span>Quantitative results of ATE RMSE (cm) on Apartment <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.17765v3#bib.bib23\" title=\"\">23</a>]</cite> dataset.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T5.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T5.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T5.1.2.1.1\">Method</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.1.2.1.2\">NICE.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.1.2.1.3\">Vox.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.1.2.1.4\">ESLAM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.1.2.1.5\">Co.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T5.1.2.1.6\">Ours</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T5.1.1.1\">ATE RMSE (cm) \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T5.1.1.2\">5.66</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T5.1.1.3\">12.84</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T5.1.1.4\">N/A</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T5.1.1.5\">6.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T5.1.1.6\">6.97</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 212 |
+
"capture": "TABLE V: Quantitative results of ATE RMSE (cm) on Apartment [23] dataset."
|
| 213 |
+
},
|
| 214 |
+
"6": {
|
| 215 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T6\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE VI: </span>Run-time and memory comparison on Replica <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.17765v3#bib.bib35\" title=\"\">35</a>]</cite>, ScanNet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.17765v3#bib.bib36\" title=\"\">36</a>]</cite>, and Apartment <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2403.17765v3#bib.bib23\" title=\"\">23</a>]</cite> scenes.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T6.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T6.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T6.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T6.1.1.1.2\">Method</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T6.1.1.1.3\">Speed FPT(s)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T6.1.1.1.4\"># Param.</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T6.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T6.1.2.1.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T6.1.2.1.1.1\">Replica</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T6.1.2.1.2\">ESLAM</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.1.2.1.3\">0.18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.1.2.1.4\">6.85M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T6.1.3.2.1\">Co-SLAM</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.3.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.3.2.2.1\">0.12</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.3.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.3.2.3.1\">0.26M</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T6.1.4.3.1\">Ours</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.4.3.2\">0.21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.4.3.3\">6.28M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T6.1.5.4.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T6.1.5.4.1.1\">ScanNet</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T6.1.5.4.2\">ESLAM</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.1.5.4.3\">0.56</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.1.5.4.4\">17.8M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T6.1.6.5.1\">Co-SLAM</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.6.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.6.5.2.1\">0.19</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.6.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.6.5.3.1\">1.59M</span></td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S4.T6.1.7.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T6.1.7.6.1\">Ours</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.7.6.2\">0.28</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.7.6.3\">10.73M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.8.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T6.1.8.7.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T6.1.8.7.1.1\">Apartment</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T6.1.8.7.2\">ESLAM</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.1.8.7.3\">2.40</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.1.8.7.4\">22.1M</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.9.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T6.1.9.8.1\">Co-SLAM</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.9.8.2\">0.23</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T6.1.9.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.9.8.3.1\">1.59M</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.1.10.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T6.1.10.9.1\">Ours</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.1.10.9.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T6.1.10.9.2.1\">0.22</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T6.1.10.9.3\">12.38M</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 216 |
+
"capture": "TABLE VI: Run-time and memory comparison on Replica [35], ScanNet [36], and Apartment [23] scenes."
|
| 217 |
+
},
|
| 218 |
+
"7": {
|
| 219 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T7\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE VII: </span>Quantitative results of ablation study. </figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T7.5\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T7.5.6.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T7.5.6.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T7.5.6.1.1.1\" style=\"font-size:70%;\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"4\" id=\"S4.T7.5.6.1.2\"><span class=\"ltx_text\" id=\"S4.T7.5.6.1.2.1\" style=\"font-size:70%;\">Replica</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T7.5.6.1.3\"><span class=\"ltx_text\" id=\"S4.T7.5.6.1.3.1\" style=\"font-size:70%;\">ScanNet</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T7.1.1.1\">\n<span class=\"ltx_text\" id=\"S4.T7.1.1.1.1\" style=\"font-size:70%;\">Acc.</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T7.2.2.2\">\n<span class=\"ltx_text\" id=\"S4.T7.2.2.2.1\" style=\"font-size:70%;\">Comp.</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T7.3.3.3\">\n<span class=\"ltx_text\" id=\"S4.T7.3.3.3.1\" style=\"font-size:70%;\">Cp. Ratio</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T7.4.4.4\">\n<span class=\"ltx_text\" id=\"S4.T7.4.4.4.1\" style=\"font-size:70%;\">Depth L1</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T7.5.5.5\">\n<span class=\"ltx_text\" id=\"S4.T7.5.5.5.1\" style=\"font-size:70%;\">ATE</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T7.5.7.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T7.5.7.1.1\"><span class=\"ltx_text\" id=\"S4.T7.5.7.1.1.1\" style=\"font-size:70%;\">w/o multi-map</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.5.7.1.2\"><span class=\"ltx_text\" id=\"S4.T7.5.7.1.2.1\" style=\"font-size:70%;\">1.04</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.5.7.1.3\"><span class=\"ltx_text\" id=\"S4.T7.5.7.1.3.1\" style=\"font-size:70%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.5.7.1.4\"><span class=\"ltx_text\" id=\"S4.T7.5.7.1.4.1\" style=\"font-size:70%;\">99.34</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T7.5.7.1.5\"><span class=\"ltx_text\" id=\"S4.T7.5.7.1.5.1\" style=\"font-size:70%;\">1.20</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.5.7.1.6\"><span class=\"ltx_text\" id=\"S4.T7.5.7.1.6.1\" style=\"font-size:70%;\">9.64</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.5.8.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T7.5.8.2.1\"><span class=\"ltx_text\" id=\"S4.T7.5.8.2.1.1\" style=\"font-size:70%;\">w/o tri-plane</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.5.8.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T7.5.8.2.2.1\" style=\"font-size:70%;\">0.96</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.5.8.2.3\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S4.T7.5.8.2.3.1\" style=\"font-size:70%;\">0.98</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.5.8.2.4\"><span class=\"ltx_text\" id=\"S4.T7.5.8.2.4.1\" style=\"font-size:70%;\">99.30</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T7.5.8.2.5\"><span class=\"ltx_text\" id=\"S4.T7.5.8.2.5.1\" style=\"font-size:70%;\">1.16</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.5.8.2.6\"><span class=\"ltx_text\" id=\"S4.T7.5.8.2.6.1\" style=\"font-size:70%;\">9.48</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.5.9.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T7.5.9.3.1\"><span class=\"ltx_text\" id=\"S4.T7.5.9.3.1.1\" style=\"font-size:70%;\">w/o global BA</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.5.9.3.2\"><span class=\"ltx_text\" id=\"S4.T7.5.9.3.2.1\" style=\"font-size:70%;\">1.06</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.5.9.3.3\"><span class=\"ltx_text\" id=\"S4.T7.5.9.3.3.1\" style=\"font-size:70%;\">1.08</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.5.9.3.4\"><span class=\"ltx_text\" id=\"S4.T7.5.9.3.4.1\" style=\"font-size:70%;\">99.17</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T7.5.9.3.5\"><span class=\"ltx_text\" id=\"S4.T7.5.9.3.5.1\" style=\"font-size:70%;\">1.27</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T7.5.9.3.6\"><span class=\"ltx_text\" id=\"S4.T7.5.9.3.6.1\" style=\"font-size:70%;\">12.06</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.5.10.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T7.5.10.4.1\"><span class=\"ltx_text\" id=\"S4.T7.5.10.4.1.1\" style=\"font-size:70%;\">ours full</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.5.10.4.2\"><span class=\"ltx_text\" id=\"S4.T7.5.10.4.2.1\" style=\"font-size:70%;\">1.03</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.5.10.4.3\"><span class=\"ltx_text\" id=\"S4.T7.5.10.4.3.1\" style=\"font-size:70%;\">1.01</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.5.10.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T7.5.10.4.4.1\" style=\"font-size:70%;\">99.74</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T7.5.10.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T7.5.10.4.5.1\" style=\"font-size:70%;\">0.93</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T7.5.10.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T7.5.10.4.6.1\" style=\"font-size:70%;\">8.00</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 220 |
+
"capture": "TABLE VII: Quantitative results of ablation study. "
|
| 221 |
+
},
|
| 222 |
+
"8": {
|
| 223 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T8\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE VIII: </span>Impact of ill-set scene boundary. </figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T8.5\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T8.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T8.5.5.6\"><span class=\"ltx_text\" id=\"S4.T8.5.5.6.1\" style=\"font-size:50%;\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T8.1.1.1\">\n<span class=\"ltx_text\" id=\"S4.T8.1.1.1.1\" style=\"font-size:50%;\">Depth L1 </span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T8.2.2.2\">\n<span class=\"ltx_text\" id=\"S4.T8.2.2.2.1\" style=\"font-size:50%;\">Acc. </span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T8.3.3.3\">\n<span class=\"ltx_text\" id=\"S4.T8.3.3.3.1\" style=\"font-size:50%;\">Comp. </span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S4.T8.4.4.4\">\n<span class=\"ltx_text\" id=\"S4.T8.4.4.4.1\" style=\"font-size:50%;\">Cp. Ratio </span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T8.5.5.5\">\n<span class=\"ltx_text\" id=\"S4.T8.5.5.5.1\" style=\"font-size:50%;\">ATE </span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T8.5.6.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T8.5.6.1.1\"><span class=\"ltx_text\" id=\"S4.T8.5.6.1.1.1\" style=\"font-size:50%;\">NICE-SLAM</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.5.6.1.2\"><span class=\"ltx_text\" id=\"S4.T8.5.6.1.2.1\" style=\"font-size:50%;\">N/A</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.5.6.1.3\"><span class=\"ltx_text\" id=\"S4.T8.5.6.1.3.1\" style=\"font-size:50%;\">N/A</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.5.6.1.4\"><span class=\"ltx_text\" id=\"S4.T8.5.6.1.4.1\" style=\"font-size:50%;\">N/A</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T8.5.6.1.5\"><span class=\"ltx_text\" id=\"S4.T8.5.6.1.5.1\" style=\"font-size:50%;\">N/A</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T8.5.6.1.6\"><span class=\"ltx_text\" id=\"S4.T8.5.6.1.6.1\" style=\"font-size:50%;\">N/A</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.5.7.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T8.5.7.2.1\"><span class=\"ltx_text\" id=\"S4.T8.5.7.2.1.1\" style=\"font-size:50%;\">Vox-Fusion (64*0.2m)</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.5.7.2.2\"><span class=\"ltx_text\" id=\"S4.T8.5.7.2.2.1\" style=\"font-size:50%;\">1.12 / 3.45</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.5.7.2.3\"><span class=\"ltx_text\" id=\"S4.T8.5.7.2.3.1\" style=\"font-size:50%;\">1.21 / 1.48</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.5.7.2.4\"><span class=\"ltx_text\" id=\"S4.T8.5.7.2.4.1\" style=\"font-size:50%;\">1.35 / 1.61</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T8.5.7.2.5\"><span class=\"ltx_text\" id=\"S4.T8.5.7.2.5.1\" style=\"font-size:50%;\">93.85 / 90.42</span></td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T8.5.7.2.6\"><span class=\"ltx_text\" id=\"S4.T8.5.7.2.6.1\" style=\"font-size:50%;\">1.2 / 6.7</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.5.8.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T8.5.8.3.1\"><span class=\"ltx_text\" id=\"S4.T8.5.8.3.1.1\" style=\"font-size:50%;\">ESLAM</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.5.8.3.2\"><span class=\"ltx_text\" id=\"S4.T8.5.8.3.2.1\" style=\"font-size:50%;\">0.95 / 49.74</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.5.8.3.3\"><span class=\"ltx_text\" id=\"S4.T8.5.8.3.3.1\" style=\"font-size:50%;\">1.04 / 1.11</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.5.8.3.4\"><span class=\"ltx_text\" id=\"S4.T8.5.8.3.4.1\" style=\"font-size:50%;\">1.03 / 33.97</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T8.5.8.3.5\"><span class=\"ltx_text\" id=\"S4.T8.5.8.3.5.1\" style=\"font-size:50%;\">99.71 / 72.04</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T8.5.8.3.6\"><span class=\"ltx_text\" id=\"S4.T8.5.8.3.6.1\" style=\"font-size:50%;\">0.67 / 9.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T8.5.9.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T8.5.9.4.1\"><span class=\"ltx_text\" id=\"S4.T8.5.9.4.1.1\" style=\"font-size:50%;\">Co-SLAM</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T8.5.9.4.2\"><span class=\"ltx_text\" id=\"S4.T8.5.9.4.2.1\" style=\"font-size:50%;\">1.47 / 52.09</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T8.5.9.4.3\"><span class=\"ltx_text\" id=\"S4.T8.5.9.4.3.1\" style=\"font-size:50%;\">1.04 / 1.24</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T8.5.9.4.4\"><span class=\"ltx_text\" id=\"S4.T8.5.9.4.4.1\" style=\"font-size:50%;\">1.06 / 31.47</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T8.5.9.4.5\"><span class=\"ltx_text\" id=\"S4.T8.5.9.4.5.1\" style=\"font-size:50%;\">99.45 / 72.74</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T8.5.9.4.6\"><span class=\"ltx_text\" id=\"S4.T8.5.9.4.6.1\" style=\"font-size:50%;\">0.63 / 0.64</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 224 |
+
"capture": "TABLE VIII: Impact of ill-set scene boundary. "
|
| 225 |
+
}
|
| 226 |
+
},
|
| 227 |
+
"image_paths": {
|
| 228 |
+
"1": {
|
| 229 |
+
"figure_path": "2403.17765v3_figure_1.png",
|
| 230 |
+
"caption": "Figure 1: Our MUTE-SLAM system demonstrates rapid and accurate tracking and mapping across indoor environments of varying scales without pre-defined boundaries. We depict the trajectories and meshes of both a small and a large scenario: estimated trajectories are marked in blue, while ground truths are in green. The left image is an around-a-desk scene from the TUM-RGBD dataset [34], while the image on the right is\na multiple-room scene from Apartment dataset provided by NICE-SLAM [23].",
|
| 231 |
+
"url": "http://arxiv.org/html/2403.17765v3/extracted/5869755/first.png"
|
| 232 |
+
},
|
| 233 |
+
"2": {
|
| 234 |
+
"figure_path": "2403.17765v3_figure_2.png",
|
| 235 |
+
"caption": "Figure 2: The overview of MUTE-SLAM.Our method consists of three parts. 1)Scene representation: the whole scene is represented by several sub-maps created on the fly. Each sub-map is formulated by double tri-plane hash-encoders, one for TSDF and the other for color encoding. 2)Tracking: this module optimizes the pose for each frame through differentiable rendering. 3)Mapping: the mapping module dynamically allocates new sub-maps with a tracked pose. It conducts a joint optimization of both scene and pose parameters, utilizing the current frame along with co-visible keyframes. 4)Bundle Adjustment: by sampling keyframes globally, this module further refines all trainable parameters and ensures global consistency.",
|
| 236 |
+
"url": "http://arxiv.org/html/2403.17765v3/x1.png"
|
| 237 |
+
},
|
| 238 |
+
"3": {
|
| 239 |
+
"figure_path": "2403.17765v3_figure_3.png",
|
| 240 |
+
"caption": "Figure 3: Qualitative reconstruction results on Replica.",
|
| 241 |
+
"url": "http://arxiv.org/html/2403.17765v3/x2.png"
|
| 242 |
+
},
|
| 243 |
+
"4": {
|
| 244 |
+
"figure_path": "2403.17765v3_figure_4.png",
|
| 245 |
+
"caption": "Figure 4: Qualitative reconstruction results on ScanNet [36]. Our reconstructed mesh achieves better completion and fewer artifacts compared to ESLAM [28]. Additionally, our method produces sharper and more detailed geometry than Co-SLAM [24].",
|
| 246 |
+
"url": "http://arxiv.org/html/2403.17765v3/extracted/5869755/scan.png"
|
| 247 |
+
},
|
| 248 |
+
"5": {
|
| 249 |
+
"figure_path": "2403.17765v3_figure_5.png",
|
| 250 |
+
"caption": "Figure 5: Qualitative comparison of our method employing tri-plane hash-encoding versus without it, using reconstructed meshes from Replica [35] scenes.The left-most images illustrate how hash collisions can result in rough surfaces and low-quality textures in flat areas like walls and windows. Our tri-plane approach significantly mitigates these issues, achieving better results even with smaller hash tables. The other two images further show that our design leaves fewer artifacts in unobserved regions.",
|
| 251 |
+
"url": "http://arxiv.org/html/2403.17765v3/extracted/5869755/abla.png"
|
| 252 |
+
}
|
| 253 |
+
},
|
| 254 |
+
"validation": true,
|
| 255 |
+
"references": [],
|
| 256 |
+
"url": "http://arxiv.org/html/2403.17765v3"
|
| 257 |
+
}
|
20240921/2404.02180v4.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240921/2404.04838v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240921/2404.08368v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240921/2405.17520v4.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240921/2406.03822v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240921/2406.05766v2.json
ADDED
|
@@ -0,0 +1,592 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Set-CLIP: Exploring Aligned Semantic From Low-Alignment Multimodal Data Through A Distribution View",
|
| 3 |
+
"abstract": "Multimodal fusion breaks through the boundaries between diverse modalities and has already achieved notable performances. However, in many specialized fields, it is struggling to obtain sufficient alignment data for training, which seriously limits the use of previously effective models. Therefore, semi-supervised learning approaches are attempted to facilitate multimodal alignment by learning from low-alignment data with fewer matched pairs, but traditional techniques like pseudo-labeling may run into troubles in the label-deficient scenarios. To tackle these challenges, we reframe semi-supervised multimodal alignment as a manifold matching issue and propose a new methodology based on CLIP, termed Set-CLIP. Specifically, by designing a novel semantic density distribution loss, we constrain the latent representation distribution with fine granularity and extract implicit semantic alignment from unpaired multimodal data, thereby reducing the reliance on numerous strictly matched pairs. Furthermore, we apply coarse-grained modality adaptation and unimodal self-supervised guidance to narrow the gaps between modality spaces and improve the stability of representation distributions. Extensive experiments conducted on a range of tasks in various fields, including protein analysis, remote sensing, and the general vision-language field, validate the efficacy of our proposed Set-CLIP method. Especially with no paired data for supervised training, Set-CLIP is still outstanding, which brings an improvement of over CLIP.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "As a pivotal foundation for numerous tasks(Rombach et al. 2022 ###reference_b38###; Zhou et al. 2023 ###reference_b55###), multimodal learning has become the focus of many research(Rasekh et al. 2024 ###reference_b37###; Bhalla et al. 2024 ###reference_b2###). By integrating information from diverse modalities such as texts, images and more, multimodal models can derive more comprehensive information to enhance the generalization of the learned representations(Gao et al. 2020 ###reference_b12###). Meanwhile, such fusion enables networks to emulate human-like multiple perceptual capabilities and address the inherent challenges like data scarcity, noise and ambiguity in various domains, from computer vision to healthcare(Wang et al. 2022 ###reference_b43###; Zang et al. 2024 ###reference_b49###).\nTo better harness latent alignment information, previous studies have mainly concentrated on developing frameworks and pretraining objectives to enhance multimodal understanding. Result from the sufficient developments of images and texts, many studies have made great progress in the vision-language field(Radford et al. 2021 ###reference_b36###; Chen et al. 2022 ###reference_b6###; Kim, Son, and Kim 2021 ###reference_b22###).Thereinto, CLIP employs a contrastive pretraining task on large-scale datasets and gets robust multimodal representation. Due to effective framework and general pretraining task, it has good portability and competitive performance compared with supervised methods, hence it becomes the baseline for various vision-language works(Li et al. 2023 ###reference_b28###). Besides the aforesaid traditional field, in other intersecting domains, great breakthroughs have been also made by applying CLIP. EchoCLIP(Christensen et al. 2024 ###reference_b7###) improves the performance of cardiac imaging models by correlating ultrasound images with expert texts while ProtST(Xu et al. 2023a ###reference_b46###) capture more protein function information by aligning protein sequences and textual property descriptions. Moreover, in the field of zero-shot video recognition, Open-VCLIP(Weng et al. 2023 ###reference_b44###) also shows excellent performance by leveraging the similar paradigm, which proves the powerful effects of CLIP.\n###figure_1### ###figure_2### ###figure_3### Nonetheless, there are still many specialized fields where it is usually difficult to obtain sufficient alignment data(Li et al. 2022a ###reference_b24###) while traditional multimodal models like CLIP can only learn from matched pairs, which greatly limites the performance of previously elaborate models. In order to break above dilemma, several studies paid attention to these specialized fields and attempted to learn from low-alignment data with fewer matched pairs for pretraining(Zhou et al. 2022 ###reference_b54###; Mu et al. 2022 ###reference_b34###). The main idea is to modify the loss function of CLIP and apply semi-supervised learning method(Yang et al. 2022 ###reference_b48###) to explore the latent alignment information from unlabeled data. Recently, a new research improves the original CLIP and proposes S-CLIP(Mo et al. 2023 ###reference_b33###) which introduces two novel pseudo-labeling losses for unlabeled images and achieve state-of-the-art in various specialized semi-supervised vision-language fields.\nHowever, pseudo-labeling methods may only be limited to the fields with class information and have difficulties scaling to other specialized multimodal domains where pseudo-labels are struggling to obtain. 
Meanwhile, the knowledge of generating pseudo-labels only relies on the insufficient labeled data, which leads to narrow ken and may loss much potential alignment information. In addition, the quality of pseudo-label has a great impact on the final performance so the learning process is unstable and even negative(Arazo et al. 2020 ###reference_b1###). In order to solve these problems, it is necessary to design new semi-supervised methods for multimodalities, which can capture latent alignment information in unpaired data and be well extended to various multimodal domains.\nTherefore, we propose a novel semi-supervised learning method for multimodal alignment based on CLIP, named as Set-CLIP. We believe that ultimate representation is composed of modality, structure as well as semantic and the key to multimodal alignment is to capture the same semantic representation while ignoring the other two parts. On the premise of two aligned modal data with the same semantic distribution, we design a new pretraining task based on manifold matching and a novel loss called semantic density distribution(SDD) to better concentrate on the implicit alignment among vast unpaired multimodal data. Moreover, we introduce multi-kernel maximum mean discrepancy(MK-MMD) to eliminate the gap between modality representations while self-supervised contrastive loss is used to prevent mode collapse and enhance the robustness of semantic representation. At the same time, we apply contrastive loss from CLIP on the matched multimodal pairs to keep the correct learning direction. Set-CLIP tries to explore alignment relationship in latent space and it can be extended to various multimodal domains due to task irrelevance. Through end-to-end learning, the mutual constraints between losses prevent negative optimization and implicitly expand the knowledge range. Our approach can be transferred to different multimodal frameworks(Li et al. 2022b ###reference_b25###, c ###reference_b27###) and the comparison of Set-CLIP with other strategies is shown in Figure 1 ###reference_###.\nIn short, our contributions are summarized as follows:\n(1) We contribute a groundbreaking perspective for the semi-supervised multimodal alignment problem by reframing it as a manifold matching problem, which brings a new pathway to exploit the implicit alignment information in the rich, yet largely unmatched multimodal data.\n(2) We design a novel semantic density distribution loss with fine-grained constrain and it can be applied in various specialized fields as well as different multimodal frameworks. We introduce other objectives based on theoretical analysis about the components of representation and propose Set-CLIP to realize multimodal alignment with less supervised pairs. Moreover, our method can be applied to other domains with two-stream networks(Cao, Lu, and Zhang 2024 ###reference_b3###), such as knowledge distillation(Pham et al. 2022 ###reference_b35###), self-supervised learning(He et al. 2020 ###reference_b16###) and domain adaptation.\n(3) We conduct extensive experiments in various fields and prove the advantages of Set-CLIP. Moreover, We also explain the effects of key modules and provide a feasible usage paradigm for the specialized fields with limited supervised pairs."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Works",
|
| 15 |
+
"text": "Multimodal alignment.\nMultimodality enhances understanding and decision-making by integrating information from multiple sensorymodalities(Martin et al. 2022 ###reference_b32###). Thereinto, ALBEF (Li et al. 2021 ###reference_b26###) aligns visual and language representations, using momentum distillation to improve multimodal embeddings. FLAVA (Singh et al. 2022 ###reference_b40###) enhances multitask and cross-modal learning by jointly pretraining text and images. ALIGN (Jia et al. 2021 ###reference_b21###) jointly trains language and image encoders, significantly enhancing performance across various vision and text benchmarks. In recent years, researches around CLIP has further optimized computational efficiency and model representation capabilities.\nFor instance, FLIP(Li et al. 2023 ###reference_b28###) brings lower computation and faster training times by randomly removing a large number of image patches during training process while SoftCLIP(Gao et al. 2024 ###reference_b14###) applies fine-grained interior self-similarity as a softening target to alleviate the strict mutual exclusion problem. Moreover, latent diffusion models generates reliable text embeddings as condition by using pretrained text encoder of CLIP and CLIPSelf (Wu et al. 2023 ###reference_b45###) enhances region-level representation through self-distillation from CLIP\u2019s image encoder, which proves the powerful effects of CLIP.\nSemi-supervised learning.\nSemi-supervised learning (Van Engelen and Hoos 2020 ###reference_b41###) uses both labeled and unlabeled data to improve training process, encompassing strategies like pseudo-labeling (Cascante-Bonilla et al. 2021 ###reference_b4###), where models self-label their training data, and self-supervised learning (Krishnan, Rajpurkar, and Topol 2022 ###reference_b23###; Liu et al. 2022 ###reference_b30###), which explores the values in data itself. vONTSS (Xu et al. 2023b ###reference_b47###) utilizes the von Mises-Fisher distribution and optimal transport for semi-supervised neural topic modeling to improve topic extraction in text datasets. SSGD (Zhou, Loy, and Liu 2023 ###reference_b53###) proposes a new semi-supervised domain generalization method that enhances model robustness under domain shifts through stochastic modeling and style augmentation. SS-ORL (Zheng et al. 2023b ###reference_b52###) employs a semi-supervised offline reinforcement learning approach, improving learning outcomes by utilizing unlabeled trajectories and limited complete action data. Semi-supervised learning ensures performance with fewer samples, but in the specialized domains, how to achieve complementarity and integration across different modalities with limited paired data remains an issue that needs attention and resolution(Zong, Mac Aodha, and Hospedales 2024 ###reference_b56###)."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Method",
|
| 21 |
+
"text": ""
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "Problem Description and Assumption",
|
| 27 |
+
"text": "Different from the general vision-language field, there could be only limited available matched pairs between specific associated modalities while it is relatively simple to get a large amount of unimodal data with similar semantic distribution. Therefore, we propose Set-CLIP, which uses massive unmatched data as well as limited matched pairs to realize more generalized alignment through semi-supervised learning. Formally, for any two modalities and , we employ a small number of matched pairs and a large number of unmatched data as well as to train our model. Through sampling respectively from two unpaired sets, we acquire and as unsupervised training data which are only considered to have similar semantic distribution rather than strict one-to-one matching. Based on a natural assumption, models can be trained on these adequate unmatched multimodal data.\nAssumption 1 (Semantic Distribution Similarity Assumption, SDSA). We suppose that the latent embedding is a combination representation of modality, structure as well as semantic and more detailed analysis will be displayed in Appendix A. The goal of multimodal alignment is to find the same semantic representation and get rid of the interference from the other two representations. If the overall semantic distributions of and are similar, we can find a embedding space where and are the embedding representations respectively from and . When the density distributions of as and as are similar, this space is the semantic embedding space of and . Consequently, when datasets from two modalities have the similar semantic distribution and their volumes are large enough, we can find the aligned semantic space by narrowing the gap between the density distribution from two modalities rather than strict matching relationship or pseudo-labeling method. Through the above assumption, we can explore the value from unpaired data and the semi-supervised multimodal alignment can be transformed into a manifold matching problem."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.2",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "Framework of Set-CLIP",
|
| 33 |
+
"text": "Figure 2 ###reference_### introduces the conceptual overview of Set-CLIP. Due to the convenience and efficiency of CLIP, our proposed method follows to design the two-stream network. Each stream includes an encoder network and a projection head network , which are applied to map the data from original space into embedding space. The network from different streams adopt different backbone and is trained from scratch. We introduce MK-MMD as well as self-supervised contrastive loss(SSL) and design a novel semantic density distribution loss(SDD) to learn potential alignment in large amounts of unpaired data. Through contrastive metrix, we apply contrastive loss(CL) on limited supervised pairs to guarantee proper optimization. The multimodal batch with the size of is composed of paired data from while the rest data is sampling from two unsupervised training datasets. A detailed description of loss functions will be shown below.\n###figure_4###"
|
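To make the two-stream layout described above more concrete, here is a minimal PyTorch-style sketch of how such a framework and its mixed batches could be assembled. All names (TwoStreamSetCLIP, build_batch, proj_dim, n_pairs, ...) and sizes are illustrative assumptions for this sketch, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamSetCLIP(nn.Module):
    # one encoder and one projection head per modality, trained from scratch
    def __init__(self, encoder_a: nn.Module, encoder_b: nn.Module,
                 dim_a: int, dim_b: int, proj_dim: int = 256):
        super().__init__()
        self.encoder_a, self.encoder_b = encoder_a, encoder_b
        self.proj_a = nn.Sequential(nn.Linear(dim_a, proj_dim), nn.ReLU(), nn.Linear(proj_dim, proj_dim))
        self.proj_b = nn.Sequential(nn.Linear(dim_b, proj_dim), nn.ReLU(), nn.Linear(proj_dim, proj_dim))
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # learnable temperature (log scale)

    def forward(self, x_a, x_b):
        # map both modalities into the shared embedding space and L2-normalize
        z_a = F.normalize(self.proj_a(self.encoder_a(x_a)), dim=-1)
        z_b = F.normalize(self.proj_b(self.encoder_b(x_b)), dim=-1)
        return z_a, z_b

def build_batch(paired_a, paired_b, unpaired_a, unpaired_b, n_pairs=64, n_unpaired=192):
    # each batch mixes a few matched pairs with many unmatched samples, so the supervised
    # contrastive loss and the distribution-level losses can be applied jointly
    idx_p = torch.randperm(len(paired_a))[:n_pairs]
    idx_ua = torch.randperm(len(unpaired_a))[:n_unpaired]
    idx_ub = torch.randperm(len(unpaired_b))[:n_unpaired]
    batch_a = torch.cat([paired_a[idx_p], unpaired_a[idx_ua]])
    batch_b = torch.cat([paired_b[idx_p], unpaired_b[idx_ub]])
    return batch_a, batch_b, n_pairs  # the first n_pairs rows of each batch are aligned

In such a setup, only the first n_pairs rows would feed the CLIP-style contrastive loss, while the whole batch would feed the distribution-level objectives sketched after Section 3.3.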
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.3",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Objective Loss",
|
| 39 |
+
"text": "Coarse-Grained Modality Adaptation:\nThere may be large discrepancy between the embedding distributions from different modalities. While the latent representations are supposed to have the similar distribution space when they become aligned, which can be treated as a domain adaptation problem. Thereinto, MK-MMD(Long et al. 2015 ###reference_b31###) is used to measure the gap between two probability distributions and while the core idea of this method is that samples as well as drawn from and should keep similar statistical properties if the two distributions are the same. Specifically, MK-MMD maps the data from original space to Reproducing Kernel Hilbert Spac(RKHS) by kernel functions(Roth and Steinhage 1999 ###reference_b39###) and we can compare the difference between distributions in this space. Through linear combination of multiple kernel functions, we could get a more robust mapping function to RKHS where we can easily distinguish two distributions even though they are similar in original space. The formula is shown as follows:\nwhere is batch size while and are latent representations from two modalities. is RKHS induced by kernel function and is implicit function used to map the original space data to . For multi-kernel cases, kernel function is a linear combination of basic kernel functions and the format is . learnable kernel weight is obtained through optimization to effectively represent differences between distributions. In our method, equals to while we choose Gaussian Kernel and Polynomial Kernel as basic kernel function.\nFine-grained Semantic Distribution Alignment:\nSince MK-MMD pays attaention to the whole distribution rather than sample level so it is imprecise and can only achieve macro alignment which is not enough for representation alignment. Consequently, we propose a novel objective named as semantic density distribution loss(SDD) to explore more fine-grained information from unpaired data and realize more refined alignment. SDD is inspired from the perspective of probability density distribution estimation, hence it could keep an eye on specific sample representation while take the whole semantic distribution alignment into consideration at the same time. The formula is shown as follows:\nwhere works on the embedding space to measure the difference between two representation distributions more accurately in a symmetrical way and the models are trained to minimize the loss value to realize latent semantic alignment. and denotes embedding distributions and the format of is shown as follows:\nhere for generality and convenience, we define three intermediate variables, while are sets composed of latent represenations and denotes the latent representation of a sample. is the size of batch which is a combination of matched pairs and unmatched data and Kullback-Leibler divergence is introduced to measure the dis-similarity between the density values of a specific sample from two distributions. The format of is displayed in the following formula.\nhere we apply exponential function as probability density function and denotes bandwidth used to control the smoothness. denotes the variance of distribution and the format is shown as follows.\nwhere is the sample from set and we apply sample variance with Bessel\u2019s Correction. can lead model to focus on narrowing the gap between semantic distributions while avoid close cluster. By employing , semantic aligned data from different modalities will get similar density distribution in the latent space during training. 
Meanwhile, the time complexity of is which is the same as . More details of SDD will show in Appendix B.\nSupervised Alignment Guidance:\nBased on problem description, there are limited matched pairs and a large number of unmatched data . Due to the lack of sufficient data, we are supposed to learn generalized representations through unsupervised data and take advantage of explicit alignment relationship as ground-truth to achieve precise alignment. We apply contrastive loss in CLIP, which is to maximize the representation similarity between matched pairs while minimize the similarity between negative pairs. The format of this loss is shown as follows:\nwhere denotes the paired size in a batch and is generally . and are representations in latent space respectively from two modalities. is a learnable temperature parameter and denotes cosine similarity. We expect to apply supervised as well as unsupervised data in every batch to jointly train the model due to the reason that a mass of unsupervised data can bring richer alignment information while matched pairs could lead to more accurate learning.\nSelf-supervised Distribution Stability:\nRely on self-supervised contrastive loss(SSL)(Chen et al. 2020 ###reference_b5###; Gao, Yao, and Chen 2021 ###reference_b13###), we can adequately find out implicit information from single modality and get robust feature representation. In the field of multimodal alignment with limit matched pairs, we find that it is essential to apply this objective because it can pull away the representations of different samples in the latent space with incomplete alignment guidance. In other words, if SSL is not employed, the data without alignment constraint may gather into a tight cluster. To be specific, we apply augmentation to generate positive pairs and the format of is displayed as follows:\nwhere is latent embedding and denotes the representation of corresponding positive sample. In our method, each modality is supposed to apply this loss while is calculated through cosine similarity. We denote and as practical latent representation respectively from different two modalities and the corresponding objectives are named as as well as . According to the above constraint, we propose new loss named as and the formula is shown as follows:\nwhere is a hyperparameter and is used to guide the training process with accurate supervised alignment information rather than semantic distribution similarity, which is necessary for avoiding negative optimization. Meanwhile, if the data from different modalities can achieve augmentation according to the common semantics rather than the pattern in the single modality, the performance of related method may realize further growth(Huh et al. 2024 ###reference_b20###).\nThe Overall Pretraining Objective:\nOur method aims to adopt matched pairs as well as unsupervised data in a batch at the same time. In this way, during the pretraining process, we can utilize comprehensive unsupervised data as well as the alignment constraint from matched pairs to realize robust and stable optimization process. Moreover, through semantic distribution alignment, the knowledge learned from unsupervised data and matched pairs can potentially interact with each other which could enlarge the range of knowledge. For overall pretraining objective, we seek to minimize the loss functions of all pretraining tasks simultaneously:\nwhere denotes all learnable parameters in encoder and projection head networks. 
, and are hyperparameters used to control the impacts of different pretraining tasks."
|
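As a rough illustration of how the objectives above could be computed in practice, the sketch below implements a multi-kernel MMD term (Gaussian plus polynomial kernels with uniform weights), a KDE-style semantic density distribution term that penalizes the symmetric KL divergence between per-sample density profiles of the two embedding sets, and the CLIP contrastive term on the paired subset. The bandwidth heuristic, kernel parameters, loss weights and normalization choices are assumptions made for this sketch rather than the paper's exact formulation.

import torch
import torch.nn.functional as F

def mk_mmd(z_a, z_b, gammas=(0.5, 1.0, 2.0), poly_degree=2):
    # multi-kernel MMD: sum of Gaussian kernels plus one polynomial kernel (biased estimator)
    def kernel(x, y):
        d2 = torch.cdist(x, y).pow(2)
        k = sum(torch.exp(-g * d2) for g in gammas)
        return k + (x @ y.t() + 1.0).pow(poly_degree)
    return kernel(z_a, z_a).mean() + kernel(z_b, z_b).mean() - 2.0 * kernel(z_a, z_b).mean()

def sdd(z_a, z_b, bandwidth=0.5, eps=1e-8):
    # semantic density distribution loss: estimate each sample's density under both embedding
    # sets with an exponential kernel, then penalize the symmetric KL divergence between the
    # two normalized density profiles
    def density(queries, support, sigma):
        d2 = torch.cdist(queries, support).pow(2)
        return torch.exp(-d2 / (2.0 * sigma ** 2 + eps)).mean(dim=1)
    sigma_a = bandwidth * z_a.var(dim=0, unbiased=True).mean().sqrt()
    sigma_b = bandwidth * z_b.var(dim=0, unbiased=True).mean().sqrt()
    queries = torch.cat([z_a, z_b], dim=0)
    p = density(queries, z_a, sigma_a)
    q = density(queries, z_b, sigma_b)
    p, q = p / (p.sum() + eps), q / (q.sum() + eps)
    kl_pq = (p * (p.add(eps).log() - q.add(eps).log())).sum()
    kl_qp = (q * (q.add(eps).log() - p.add(eps).log())).sum()
    return kl_pq + kl_qp

def clip_loss(z_a, z_b, logit_scale):
    # standard CLIP contrastive loss, applied only to the matched prefix of the batch
    logits = logit_scale.exp() * z_a @ z_b.t()
    labels = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

def total_loss(z_a, z_b, n_pairs, logit_scale, ssl_a=0.0, ssl_b=0.0,
               alpha=0.1, beta=1.0, gamma=0.1):
    # overall objective: supervised CL on the pairs plus distribution-level terms on the whole batch
    return (clip_loss(z_a[:n_pairs], z_b[:n_pairs], logit_scale)
            + alpha * mk_mmd(z_a, z_b)
            + beta * sdd(z_a, z_b)
            + gamma * (ssl_a + ssl_b))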
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "Experiments",
|
| 45 |
+
"text": "In order to evaluate the effectiveness of proposed method, we conduct extensive experiments in various fields, including protein representation, remote sensing as well as general vision-language field. In addition, we design sufficient ablation experiments to analyze the roles of key modules."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.1",
|
| 49 |
+
"parent_section_id": "4",
|
| 50 |
+
"section_name": "Quantitative Analysis About Sampling Size",
|
| 51 |
+
"text": "As mentioned above, there exists implicit alignment information between different modalities with similar semantic distribution even if there is no definite matched pairs. Therefore, if we can acquire unimodal batches which reflect the real distribution of original data by stochastic sampling in each modality, it is derivable that each batch from different modalities also keeps similar semantic distribution and can be used for subsequent training process. Obviously, sampling size significantly influence the ability whether batches are on behalf of original distributions. Hence We attempt to quantitatively analyze the ability of different scales of sampling size for representing the original distribution, which can guide to choose the proper size. By applying soft Parzen-window method, we can calculate the representing confidence of sample batch with given size. Through experimental verification, it could be concluded that sample batches will be able to represent original complicated distribution effectively when sampling size is over . Furthermore, if different modal data is from the same semantic distribution, the batches with sampling size over will also keep the similar semantic distribution. More Detailed process of method and relevant analysis will be displayed in Appendix E."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.2",
|
| 55 |
+
"parent_section_id": "4",
|
| 56 |
+
"section_name": "Evaluation On Single Protein Function Prediction",
|
| 57 |
+
"text": "Overview of tasks and training setup:\nTo examine the efficacy of Set-CLIP in non-vision-language multimodal domains with insufficient alignment data, we conduct experiments in the protein representation field. Proteins can be defined using a multi-level structure and most previous works take aligned sequence and structure as input for single-stream network to capture the invariance features(Hermosilla et al. 2020 ###reference_b18###). Due to limited aligned data, these intricately designed models struggle into trouble. Following (Zheng et al. 2023a ###reference_b51###), we consider sequence and structure as two modalities and apply Set-CLIP to realize multimodal fusion by pulling semantic distributions closer from extensive unsupervised data. Structure encoder is designed based on CDConv(Fan et al. 2022 ###reference_b11###) while ESM-2(Lin et al. 2022 ###reference_b29###) is selected as sequence encoder. We adopt CATH dataset for pretraining and this process lasts epochs. According to the same settings in (Fan et al. 2022 ###reference_b11###), we evaluate the proposed method on the following four tasks: protein fold classification, enzyme reaction classification, gene ontology (GO) term prediction and enzyme commission (EC) number prediction(Gligorijevi\u0107 et al. 2021 ###reference_b15###). More details of this experiment is shown in Appendix F and D.\nResults:\nThe performance of downstream tasks are shown in Table 1 ###reference_### while results of previous approaches are from (Fan et al. 2022 ###reference_b11###; Zhang et al. 2023 ###reference_b50###; Xu et al. 2023a ###reference_b46###). Thereinto, CLIP(1/2 CATH) is a two-stream network with 50% CATH data for pretraining while CLIP(CATH) is pretrained on the whole CATH datasets. Set-CLIP(Ours) adopts 50% CATH data as supervised pairs while the rest are considered as unlabeled data. Moreover, we add an Average item to evaluate the overall performance. We first verify the effect of two-stream network compared to single-stream model and the results are displayed at the last line in Table 1 ###reference_###. We can find that CLIP achieve better results at most downstream tasks and show superiority especially in EC number prediction. Further, it is obvious that Set-CLIP dramatically narrow the overall gap between CLIP(CATH) and the performance is even better at a few downstream tasks. This may be due to the fact that Set-CLIP can explore implicit alignment through the fine-grained semantic distribution constraint of SDD while CLIP only focuses on local representation match which may loss some global distribution information."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.3",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "Evaluation On Remote Sensing Datasets",
|
| 63 |
+
"text": "Overview of tasks and training setup:\nThe models in remote sensing field can acquire comprehensive knowledge by jointly learning satellite images and corresponding captions. However, the training datasets are usually composed of web-crawled data and annotating captions may also need various expert knowledge, which can be expensive and time-consuming. So it is essential to evaluate the performance of Set-CLIP on limited matched pairs which is hard to tackle by traditional methods. Following (Mo et al. 2023 ###reference_b33###), Set-CLIP is pretrained on the union of RSICD, UCM and Sydney with zero-shot classification and image-text retrieval as downstream tasks. ResNet(He et al. 2016 ###reference_b17###) and transformer(Vaswani et al. 2017 ###reference_b42###) are chosen as encoders and Set-CLIP is pretrained for epochs. We subsample of image-text pairs for supervised learning while the remaining data is served unlabeled but conform to the same semantic distribution. Similarly, Top-1 classification accuracy is used to evaluate the performance on zero-shot classification while recall is applied for image-text retrieval tasks. More Details will be presented in the Appendix C and D.\n\n###figure_5### Results:\nTable 2 ###reference_### displays the results of zero-shot classification and the first five lines are from (Mo et al. 2023 ###reference_b33###). In this experiment, Set-CLIP is designed based on S-CLIP and trained to narrow the embedding distribution gap between unlabeled images and texts under the guidance of SDD. The whole distributions of batches from different modalities may keep similar even though the explicit matching relationship is unknown between specific samples. For zero-shot classification, our method shows outstanding performance except RSSCN7. We can also find that existing methods are all difficult to bring much gain compared with other datasets, so it is believed that RSSCN7 may have significant gap with training set resulting in greater difficulty for inference. As shown in Appendix G, Set-CLIP consistently improves the results in image-text retrieval, which proves that less distribution gap between unlabeled images and texts brings robust pseudo-labels and stable distribution structure."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.4",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Evaluation On General Vision-Language Retrieval",
|
| 69 |
+
"text": "Overview of tasks and training setup:\nBesides above specialized domains, we also evaluate the performance of Set-CLIP in general vision-language field. Experiments are carried on Flickr-8k(Hodosh, Young, and Hockenmaier 2013 ###reference_b19###) and Mini COCO while image-text retrieval is adopted as downstream task. Moreover, the vision encoder employs ResNet-50 or ViT-32(Dosovitskiy et al. 2020 ###reference_b9###) and BERT(Devlin et al. 2018 ###reference_b8###) acts as the text encoder. We choose the first description of each image as corresponding caption while dropout ratio is set to for augmentation following (Gao, Yao, and Chen 2021 ###reference_b13###). Pretraining process lasts epochs and batch size is with learning rate equalling to . Furthermore, multimodal data is considered to be matched while the rest data is unlabeled. Datailed statements of datasets will be displayed in Appendix C.\nResults:\nWe evaluate the performances with different models as well as datasets and the results is shown in Table 3 ###reference_###. CLIP(1/3) is trained by 1/3 dataset while CLIP(1) learns alignment on the whole dataset. Set-CLIP employs 1/3 matched data for supervised learning and enlarge knowledge range from rest unpaired data. For better observing effects of different strategies, we adopt the improvement value as yardstick and baseline is CLIP(1/3). We can find that Set-CLIP continuously brings gains regardless of settings and the overall performance of VIT is better than ResNet for given datasets. From the experimental results, we may also conclude that multimodal fusion could be simple when different modality models have similar distance measurement.\n2613\n2613\n2613\n2613"
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.5",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "Ablation",
|
| 75 |
+
"text": "Set-CLIP applies different kinds of objectives to explore implicit semantic alignment from low-aligned multimodal data and shows excellent performances in various experiments. Then, we will further analyze the internal mechanism by replying to the following noteworthy questions.\nQ1: How will SDD and SSL influence the final effectiveness? To answer this question, we conduct experiments in protein representation field and results are shown in Table 4 ###reference_###. We can find that Set-CLIP trained with both objectives shows better performance except superfamily classification while the model only trained by SDD outperforms at this item but fall into negative optimization in EC number prediction. Single SSL brings weak improvement compared with the baseline so it also acquires additional alignment information during pretraining process. Through above results, we believe that the combination of these two objectives brings more advantages rather than simple stack of respective effect, in other words, these two losses interact and depend on each other. Specifically, SDD can exploit alignment information in unsupervised data but may cause mode collapse(Du et al. 2023 ###reference_b10###) and negative learning as shown at the third line. While adding SSL can further constrain the optimization direction and increase the stability of overall distribution. However, SSL will also affect the latent distribution of similar samples, hence it is necessary to balance the relationship to achieve better performance(Huh et al. 2024 ###reference_b20###).\n2613\n2613\n2613\n2613\nQ2: How is the performance if we change some modules of SDD?\nWith fine-grained distribution similarity measurement, SDD plays a critical role in semi-supervised fusion. So it is essential to deconstruct SDD and analyse which settings may lead to better effects. We make retrieval experiment on Flickr-8k with Top-3 recall and Table 5 ###reference_### display the results. Relative distance(RD) in eq. 4 ###reference_### can eliminate the indistinguishability in tight latent cluster compared to absolute distance while Kullback-Leibler Divergence(KL) in eq. 3 ###reference_### may be more suitable for distribution contrast with MSE. It is clear that models achieve the greatest performance when adding RD and KL simultaneously. Figure 3 ###reference_### shows retrieval results of CLIP and Set-CLIP with no paired data for supervised training and it is obvious our designed objectives can still guide to explore alignment in pairing scarcity scenario, especially bring improvement of with Top-1 recall. Other ablation results will be shown in Appendix I."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "5",
|
| 79 |
+
"parent_section_id": null,
|
| 80 |
+
"section_name": "Conclusion",
|
| 81 |
+
"text": "We reframe semi-supervised multimodal alignment as manifold matching issue and propose a new method named as Set-CLIP. Based on the data itself, we design novel pretraining tasks to explore latent alignment from unpaired multimodal data in a fine-grained manner. Through extensive experiments across various fields, we demonstrate the superiority of our method to realize rubost generalization, which provides a possible way for multimodal fusion in specialized domains with insufficient aligned data."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "6",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "Reproducibility Checklist",
|
| 87 |
+
"text": "This paper:\nIncludes a conceptual outline and/or pseudocode description of AI methods introduced (yes)\nClearly delineates statements that are opinions, hypothesis, and speculation from objective fact and results (yes)\nProvides well marked pedagogical references for less-familiar readers to gain background necessary to replicate the paper (yes)\nDoes this paper make theoretical contributions? (yes)\nIf yes, please complete the list below.\nAll assumptions and restrictions are stated clearly and formally. (yes)\nAll novel claims are stated formally (e.g., in theorem statements). (yes)\nProofs of all novel claims are included. (yes)\nProof sketches or intuitions are given for complex and/or novel results. (yes)\nAppropriate citations to theoretical tools used are given. (yes)\nAll theoretical claims are demonstrated empirically to hold. (yes)\nAll experimental code used to eliminate or disprove claims is included. (yes)\nDoes this paper rely on one or more datasets? (yes)\nIf yes, please complete the list below.\nA motivation is given for why the experiments are conducted on the selected datasets (yes)\nAll novel datasets introduced in this paper are included in a data appendix. (yes)\nAll novel datasets introduced in this paper will be made publicly available upon publication of the paper with a license that allows free usage for research purposes. (yes)\nAll datasets drawn from the existing literature (potentially including authors\u2019 own previously published work) are accompanied by appropriate citations. (yes)\nAll datasets drawn from the existing literature (potentially including authors\u2019 own previously published work) are publicly available. (yes)\nAll datasets that are not publicly available are described in detail, with explanation why publicly available alternatives are not scientifically satisfying. (yes)\nDoes this paper include computational experiments? (yes)\nIf yes, please complete the list below.\nAny code required for pre-processing data is included in the appendix. (yes)\nAll source code required for conducting and analyzing the experiments is included in a code appendix. (yes)\nAll source code implementing new methods have comments detailing the implementation, with references to the paper where each step comes from (yes)\nIf an algorithm depends on randomness, then the method used for setting seeds is described in a way sufficient to allow replication of results. (yes)\nThis paper specifies the computing infrastructure used for running experiments (hardware and software), including GPU/CPU models; amount of memory; operating system; names and versions of relevant software libraries and frameworks. (yes)\nThis paper formally describes evaluation metrics used and explains the motivation for choosing these metrics. (yes)\nThis paper states the number of algorithm runs used to compute each reported result. (yes)\nAnalysis of experiments goes beyond single-dimensional summaries of performance (e.g., average, median) to include measures of variation, confidence, or other distributional information. (yes)\nThe significance of any improvement or decrease in performance is judged using appropriate statistical tests (e.g., Wilcoxon signed-rank). (yes)\nThis paper lists all final hyper-parameters used for each model/algorithm in the paper\u2019s experiments. (yes)\nThis paper states the number and range of values tried per (hyper-) parameter during development of the paper, along with the criterion used for selecting the final parameter setting. (yes)"
|
| 88 |
+
}
|
| 89 |
+
],
|
| 90 |
+
"appendix": [],
|
| 91 |
+
"tables": {
|
| 92 |
+
"1": {
|
| 93 |
+
"table_html": "<figure class=\"ltx_table\" id=\"Sx4.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"Sx4.T1.1\" style=\"width:252.5pt;height:272.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-15.6pt,16.8pt) scale(0.89,0.89) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T1.1.1\">\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"Sx4.T1.1.1.1.1.1\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"Sx4.T1.1.1.1.2\">Gene Ontology</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T1.1.1.1.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"Sx4.T1.1.1.1.3.1\">EC</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T1.1.1.1.4\" rowspan=\"2\"><span class=\"ltx_text\" id=\"Sx4.T1.1.1.1.4.1\">Average</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.2.1\">BP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.2.2\">MF</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.2.3\">CC</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.3.1\">ResNet</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.3.2\">0.280</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.3.3\">0.405</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.3.4\">0.304</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.3.5\">0.605</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.3.6\">0.399</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.4.1\">ProtBert</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.4.2\">0.279</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.4.3\">0.456</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.4.4\">0.408</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.4.5\">0.838</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.4.6\">0.495</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.5.1\">OntoProtein</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.5.2\">0.436</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.5.3\">0.631</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.5.4\">0.441</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.5.5\">0.841</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.5.6\">0.587</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.6.1\">ESM-1b</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.6.2\">0.452</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.6.3\">0.659</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.6.4\">0.477</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.6.5\">0.869</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.6.6\">0.614</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.7.1\">ESM-2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.7.2\"><span class=\"ltx_text ltx_font_bold\" 
id=\"Sx4.T1.1.1.7.2.1\">0.472</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.7.3\">0.662</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.7.4\">0.472</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.7.5\">0.874</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.7.6\">0.620</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.8.1\">GraphQA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.8.2\">0.308</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.8.3\">0.329</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.8.4\">0.413</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.8.5\">0.509</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.8.6\">0.389</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.9.1\">GVP</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.9.2\">0.326</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.9.3\">0.426</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.9.4\">0.420</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.9.5\">0.489</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.9.6\">0.415</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.10.1\">DeepFRI</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.10.2\">0.399</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.10.3\">0.465</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.10.4\">0.460</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.10.5\">0.631</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.10.6\">0.489</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.11.1\">GearNet</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.11.2\">0.356</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.11.3\">0.503</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.11.4\">0.414</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.11.5\">0.730</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.11.6\">0.501</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.12.1\">New IEConv</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.12.2\">0.374</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.12.3\">0.544</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.12.4\">0.444</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.12.5\">0.735</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.12.6\">0.524</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.13\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.13.1\">GearNet-Edge</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.13.2\">0.403</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.13.3\">0.580</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.13.4\">0.450</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.13.5\">0.810</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.13.6\">0.561</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.14\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.14.1\">CDConv</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.14.2\">0.453</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.14.3\">0.654</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.14.4\">0.479</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.14.5\">0.820</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.14.6\">0.602</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.15.1\">CLIP(1/2 CATH)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.15.2\">0.456</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.15.3\">0.661</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.15.4\">0.485</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.15.5\">0.881</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.1.15.6\">0.621</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.16.1\">Set-CLIP(Ours)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.16.2\">0.459</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.16.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.1.16.3.1\">0.667</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.16.4\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"Sx4.T1.1.1.16.4.1\">0.491</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.16.5\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"Sx4.T1.1.1.16.5.1\">0.884</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.1.16.6\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"Sx4.T1.1.1.16.6.1\">0.625</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.17\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T1.1.1.17.1\">CLIP(CATH)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T1.1.1.17.2\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"Sx4.T1.1.1.17.2.1\">0.463</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T1.1.1.17.3\"><span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"Sx4.T1.1.1.17.3.1\">0.665</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T1.1.1.17.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.1.17.4.1\">0.493</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T1.1.1.17.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.1.17.5.1\">0.885</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T1.1.1.17.6\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.1.17.6.1\">0.627</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Benchmark results on protein representation field. <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.4.1\">Bold</span> denotes the best results while <span class=\"ltx_text ltx_framed ltx_framed_underline\" id=\"Sx4.T1.5.2\">underline</span> indicates the second best value. Two-stream networks like CLIP(CATH) outperform in most tasks and Set-CLIP can effectively reduce the gap between CLIP(CATH) with only half paired data of CATH for supervised alignment.</figcaption>\n</figure>",
|
| 94 |
+
"capture": "Table 1: Benchmark results on protein representation field. Bold denotes the best results while underline indicates the second best value. Two-stream networks like CLIP(CATH) outperform in most tasks and Set-CLIP can effectively reduce the gap between CLIP(CATH) with only half paired data of CATH for supervised alignment."
|
| 95 |
+
},
|
| 96 |
+
"2": {
|
| 97 |
+
"table_html": "<figure class=\"ltx_table\" id=\"Sx4.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"Sx4.T2.25\" style=\"width:373.8pt;height:126pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1.0,1.0) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T2.25.25\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.25.25.26\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T2.25.25.26.1\">Method</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"Sx4.T2.25.25.26.2\">RSICD-CLS</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"Sx4.T2.25.25.26.3\">UCM-CLS</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"Sx4.T2.25.25.26.4\">WHU-RS19</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"Sx4.T2.25.25.26.5\">RSSCN7</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"Sx4.T2.25.25.26.6\">AID</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.25.25.27\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.25.25.27.1\">CLIP(original)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.25.25.27.2\">45.3</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.25.25.27.3\">50.5</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.25.25.27.4\">65.5</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.25.25.27.5\">58.9</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.25.25.27.6\">47.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.5.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.5.5.5.6\">CLIP(fine-tune)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.1.1.1.1\">58.30.3</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.2.2.2.2\">63.53.4</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.3.3.3.3\">76.53.2</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.4.4.4.4\">61.91.2</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.5.5.5.5\">63.11.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.10.10.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.10.10.10.6\">Hard-PL</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.6.6.6.1\">56.63.5</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.7.7.7.2\">61.62.2</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.8.8.8.3\">78.12.5</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.9.9.9.4\">63.92.1</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T2.10.10.10.5\">63.22.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.15.15.15\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.15.15.15.6\">Soft-PL</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.11.11.11.1\">62.50.8</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.12.12.12.2\">65.72.7</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.13.13.13.3\">83.72.7</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.14.14.14.4\">65.70.6</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.15.15.15.5\">68.00.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.20.20.20\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T2.20.20.20.6\">S-CLIP</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.16.16.16.1\">66.91.7</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.17.17.17.2\">66.71.6</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.18.18.18.3\">86.92.0</td>\n<td class=\"ltx_td 
ltx_align_left\" id=\"Sx4.T2.19.19.19.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.19.19.19.4.1\">66.2</span>1.1</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T2.20.20.20.5\">73.00.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.25.25.25\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T2.25.25.25.6\">Set-CLIP(ours)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T2.21.21.21.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.21.21.21.1.1\">69.2</span>0.8</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T2.22.22.22.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.22.22.22.2.1\">67.5</span>1.1</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T2.23.23.23.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.23.23.23.3.1\">89.0</span>1.6</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T2.24.24.24.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.24.24.24.4.1\">66.2</span>0.9</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T2.25.25.25.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.25.25.25.5.1\">76.2</span>0.9</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Benchmark results on remote sensing field. <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.27.1\">Bold</span> is the best average results and Set-CLIP improves the performance at most datasets through learning from unsupervised multimodal data.</figcaption>\n</figure>",
|
| 98 |
+
"capture": "Table 2: Benchmark results on remote sensing field. Bold is the best average results and Set-CLIP improves the performance at most datasets through learning from unsupervised multimodal data."
|
| 99 |
+
},
|
| 100 |
+
"3": {
|
| 101 |
+
"table_html": "<figure class=\"ltx_table ltx_figure_panel ltx_align_center\" id=\"Sx4.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Benchmark results on general vision-language field. CLIP(1/3) is the baseline and values highlighted in <span class=\"ltx_text\" id=\"Sx4.T3.2.1\" style=\"color:#0C8918;\">green</span> indicate the improvements. Set-CLIP brings benefits across diverse settings, especially in Mini COCO dataset with VIT as encoder.</figcaption>\n</figure>",
|
| 102 |
+
"capture": "Table 3: Benchmark results on general vision-language field. CLIP(1/3) is the baseline and values highlighted in green indicate the improvements. Set-CLIP brings benefits across diverse settings, especially in Mini COCO dataset with VIT as encoder."
|
| 103 |
+
},
|
| 104 |
+
"4": {
|
| 105 |
+
"table_html": "<figure class=\"ltx_table\" id=\"Sx4.T4\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"Sx4.T4.1\" style=\"width:203.1pt;height:102.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-5.3pt,2.7pt) scale(0.95,0.95) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T4.1.1\">\n<tr class=\"ltx_tr\" id=\"Sx4.T4.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T4.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"Sx4.T4.1.1.1.1.1\">SSL</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T4.1.1.1.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"Sx4.T4.1.1.1.2.1\">SDD</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"Sx4.T4.1.1.1.3\">Fold Classification</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T4.1.1.1.4\" rowspan=\"2\"><span class=\"ltx_text\" id=\"Sx4.T4.1.1.1.4.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T4.1.1.1.4.1.1\">\n<span class=\"ltx_tr\" id=\"Sx4.T4.1.1.1.4.1.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"Sx4.T4.1.1.1.4.1.1.1.1\">EC</span></span>\n</span></span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T4.1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.1.1.2.1\">Fold</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.1.1.2.2\">Super</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.1.1.2.3\">Family</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T4.1.1.3\">\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"Sx4.T4.1.1.3.1\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"Sx4.T4.1.1.3.1.1\" style=\"width:20.6pt;height:5.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-3.4pt,0.9pt) scale(0.75,0.75) ;\"><span class=\"ltx_ERROR undefined\" id=\"Sx4.T4.1.1.3.1.1.1\">\\usym</span>\n<p class=\"ltx_p\" id=\"Sx4.T4.1.1.3.1.1.2\">2613</p>\n</span></div>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.1.1.3.2\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"Sx4.T4.1.1.3.2.1\" style=\"width:20.6pt;height:5.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-3.4pt,0.9pt) scale(0.75,0.75) ;\"><span class=\"ltx_ERROR undefined\" id=\"Sx4.T4.1.1.3.2.1.1\">\\usym</span>\n<p class=\"ltx_p\" id=\"Sx4.T4.1.1.3.2.1.2\">2613</p>\n</span></div>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.1.1.3.3\">57.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.1.1.3.4\">78.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.1.1.3.5\">99.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T4.1.1.3.6\">0.881</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T4.1.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.1.1.4.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.1.1.4.2\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"Sx4.T4.1.1.4.2.1\" style=\"width:20.6pt;height:5.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-3.4pt,0.9pt) scale(0.75,0.75) ;\"><span class=\"ltx_ERROR undefined\" id=\"Sx4.T4.1.1.4.2.1.1\">\\usym</span>\n<p class=\"ltx_p\" id=\"Sx4.T4.1.1.4.2.1.2\">2613</p>\n</span></div>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.1.1.4.3\">57.9</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"Sx4.T4.1.1.4.4\">78.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.1.1.4.5\">99.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.1.1.4.6\">0.881</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T4.1.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.1.1.5.1\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"Sx4.T4.1.1.5.1.1\" style=\"width:20.6pt;height:5.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-3.4pt,0.9pt) scale(0.75,0.75) ;\"><span class=\"ltx_ERROR undefined\" id=\"Sx4.T4.1.1.5.1.1.1\">\\usym</span>\n<p class=\"ltx_p\" id=\"Sx4.T4.1.1.5.1.1.2\">2613</p>\n</span></div>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.1.1.5.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.1.1.5.3\">58.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.1.1.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.1.1.5.4.1\">80.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.1.1.5.5\">99.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T4.1.1.5.6\">0.878</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T4.1.1.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T4.1.1.6.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T4.1.1.6.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T4.1.1.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.1.1.6.3.1\">59.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T4.1.1.6.4\">79.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T4.1.1.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.1.1.6.5.1\">99.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T4.1.1.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.1.1.6.6.1\">0.884</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Ablation results evaluated on protein representation field to analyze the roles of SSL and SDD. Thereinto, Super denotes Superfamily task and EC is EC number prediction. The values in <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T4.3.1\">bold</span> is the best result at each task.</figcaption>\n</figure>",
|
| 106 |
+
"capture": "Table 4: Ablation results evaluated on protein representation field to analyze the roles of SSL and SDD. Thereinto, Super denotes Superfamily task and EC is EC number prediction. The values in bold is the best result at each task."
|
| 107 |
+
},
|
| 108 |
+
"5": {
|
| 109 |
+
"table_html": "<figure class=\"ltx_table\" id=\"Sx4.T5\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"Sx4.T5.2\" style=\"width:178.3pt;height:102.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-4.7pt,2.7pt) scale(0.95,0.95) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"Sx4.T5.2.2\">\n<tr class=\"ltx_tr\" id=\"Sx4.T5.2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T5.2.2.2.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"Sx4.T5.2.2.2.3.1\">RD</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"Sx4.T5.2.2.2.4\" rowspan=\"2\"><span class=\"ltx_text\" id=\"Sx4.T5.2.2.2.4.1\">KL</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"Sx4.T5.1.1.1.1\">IT R@3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"Sx4.T5.2.2.2.2\">TI R@3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T5.2.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.2.2.3.1\">50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.2.2.3.2\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.2.2.3.3\">50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.2.2.3.4\">100</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T5.2.2.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.2.2.4.1\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"Sx4.T5.2.2.4.1.1\" style=\"width:20.6pt;height:5.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-3.4pt,0.9pt) scale(0.75,0.75) ;\"><span class=\"ltx_ERROR undefined\" id=\"Sx4.T5.2.2.4.1.1.1\">\\usym</span>\n<p class=\"ltx_p\" id=\"Sx4.T5.2.2.4.1.1.2\">2613</p>\n</span></div>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.2.2.4.2\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"Sx4.T5.2.2.4.2.1\" style=\"width:20.6pt;height:5.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-3.4pt,0.9pt) scale(0.75,0.75) ;\"><span class=\"ltx_ERROR undefined\" id=\"Sx4.T5.2.2.4.2.1.1\">\\usym</span>\n<p class=\"ltx_p\" id=\"Sx4.T5.2.2.4.2.1.2\">2613</p>\n</span></div>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.2.2.4.3\">28.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.2.2.4.4\">18.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.2.2.4.5\">29.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T5.2.2.4.6\">18.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T5.2.2.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.2.2.5.1\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"Sx4.T5.2.2.5.1.1\" style=\"width:20.6pt;height:5.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-3.4pt,0.9pt) scale(0.75,0.75) ;\"><span class=\"ltx_ERROR undefined\" id=\"Sx4.T5.2.2.5.1.1.1\">\\usym</span>\n<p class=\"ltx_p\" id=\"Sx4.T5.2.2.5.1.1.2\">2613</p>\n</span></div>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.2.2.5.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.2.2.5.3\">29.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.2.2.5.4\">18.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.2.2.5.5\">30.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.2.2.5.6\">18.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T5.2.2.6\">\n<td 
class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.2.2.6.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.2.2.6.2\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"Sx4.T5.2.2.6.2.1\" style=\"width:20.6pt;height:5.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-3.4pt,0.9pt) scale(0.75,0.75) ;\"><span class=\"ltx_ERROR undefined\" id=\"Sx4.T5.2.2.6.2.1.1\">\\usym</span>\n<p class=\"ltx_p\" id=\"Sx4.T5.2.2.6.2.1.2\">2613</p>\n</span></div>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.2.2.6.3\">28.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.2.2.6.4\">18.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.2.2.6.5\">29.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T5.2.2.6.6\">18.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T5.2.2.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T5.2.2.7.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T5.2.2.7.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T5.2.2.7.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.2.2.7.3.1\">29.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T5.2.2.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.2.2.7.4.1\">20.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T5.2.2.7.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.2.2.7.5.1\">31.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T5.2.2.7.6\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.2.2.7.6.1\">20.3</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Ablation results on Flickr-8k with ResNet as image encoder to analyze the effects of key modules in SDD. <span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T5.8.1\">Bold</span> indicates the best result. RD denotes relative distance while and represent the retrieval size.</figcaption>\n</figure>",
|
| 110 |
+
"capture": "Table 5: Ablation results on Flickr-8k with ResNet as image encoder to analyze the effects of key modules in SDD. Bold indicates the best result. RD denotes relative distance while and represent the retrieval size."
|
| 111 |
+
}
|
| 112 |
+
},
|
| 113 |
+
"image_paths": {
|
| 114 |
+
"1(a)": {
|
| 115 |
+
"figure_path": "2406.05766v2_figure_1(a).png",
|
| 116 |
+
"caption": "(a) CLIP\nFigure 1: Comparison among CLIP, S-CLIP and Set-CLIP on how to adopt unpaired multimodal data. (a) CLIP only uses the matched data for multimodal fusion while ignores the valuable information in unlabeled data. (b) S-CLIP attempts to improve the alignment performance by two pseudo-labeling losses but it limits itself to the language modality and heavily relies on the way how to measure similarity between samples. (c) Set-CLIP tries to explore more latent information from unmatched multimodal data by fine-grained distribution alignment, which is based on data themselves without much expert knowledge.",
|
| 117 |
+
"url": "http://arxiv.org/html/2406.05766v2/x1.png"
|
| 118 |
+
},
|
| 119 |
+
"1(b)": {
|
| 120 |
+
"figure_path": "2406.05766v2_figure_1(b).png",
|
| 121 |
+
"caption": "(b) S-CLIP\nFigure 1: Comparison among CLIP, S-CLIP and Set-CLIP on how to adopt unpaired multimodal data. (a) CLIP only uses the matched data for multimodal fusion while ignores the valuable information in unlabeled data. (b) S-CLIP attempts to improve the alignment performance by two pseudo-labeling losses but it limits itself to the language modality and heavily relies on the way how to measure similarity between samples. (c) Set-CLIP tries to explore more latent information from unmatched multimodal data by fine-grained distribution alignment, which is based on data themselves without much expert knowledge.",
|
| 122 |
+
"url": "http://arxiv.org/html/2406.05766v2/x2.png"
|
| 123 |
+
},
|
| 124 |
+
"1(c)": {
|
| 125 |
+
"figure_path": "2406.05766v2_figure_1(c).png",
|
| 126 |
+
"caption": "(c) Set-CLIP\nFigure 1: Comparison among CLIP, S-CLIP and Set-CLIP on how to adopt unpaired multimodal data. (a) CLIP only uses the matched data for multimodal fusion while ignores the valuable information in unlabeled data. (b) S-CLIP attempts to improve the alignment performance by two pseudo-labeling losses but it limits itself to the language modality and heavily relies on the way how to measure similarity between samples. (c) Set-CLIP tries to explore more latent information from unmatched multimodal data by fine-grained distribution alignment, which is based on data themselves without much expert knowledge.",
|
| 127 |
+
"url": "http://arxiv.org/html/2406.05766v2/x3.png"
|
| 128 |
+
},
|
| 129 |
+
"2": {
|
| 130 |
+
"figure_path": "2406.05766v2_figure_2.png",
|
| 131 |
+
"caption": "Figure 2: The overall framework of Set-CLIP. Here, N\ud835\udc41Nitalic_N is the number of matched pairs, while M\ud835\udc40Mitalic_M denotes the unlabeled scale. Pink indicates the loss objectives. Thereinto, MK-MMD is used to narrow the gap between the distribution spaces of different modalities, and SDD operates to reduce the differences between latent distributions in a fine-grained way. Applying SSL can enhance the robustness of representation, and the contrastive loss in CLIP ensures the proper optimization direction.",
|
| 132 |
+
"url": "http://arxiv.org/html/2406.05766v2/x4.png"
|
| 133 |
+
},
|
| 134 |
+
"3": {
|
| 135 |
+
"figure_path": "2406.05766v2_figure_3.png",
|
| 136 |
+
"caption": "Figure 3: Retrieval results of CLIP and Set-CLIP under completely unsupervised scenario.\n",
|
| 137 |
+
"url": "http://arxiv.org/html/2406.05766v2/x5.png"
|
| 138 |
+
}
|
| 139 |
+
},
|
| 140 |
+
"validation": true,
|
| 141 |
+
"references": [
|
| 142 |
+
{
|
| 143 |
+
"1": {
|
| 144 |
+
"title": "Pseudo-labeling and confirmation bias in deep semi-supervised learning.",
|
| 145 |
+
"author": "Arazo, E.; Ortego, D.; Albert, P.; O\u2019Connor, N. E.; and McGuinness, K. 2020.",
|
| 146 |
+
"venue": "In 2020 International joint conference on neural networks (IJCNN), 1\u20138. IEEE.",
|
| 147 |
+
"url": null
|
| 148 |
+
}
|
| 149 |
+
},
|
| 150 |
+
{
|
| 151 |
+
"2": {
|
| 152 |
+
"title": "Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE).",
|
| 153 |
+
"author": "Bhalla, U.; Oesterling, A.; Srinivas, S.; Calmon, F. P.; and Lakkaraju, H. 2024.",
|
| 154 |
+
"venue": "arXiv preprint arXiv:2402.10376.",
|
| 155 |
+
"url": null
|
| 156 |
+
}
|
| 157 |
+
},
|
| 158 |
+
{
|
| 159 |
+
"3": {
|
| 160 |
+
"title": "Context recovery and knowledge retrieval: A novel two-stream framework for video anomaly detection.",
|
| 161 |
+
"author": "Cao, C.; Lu, Y.; and Zhang, Y. 2024.",
|
| 162 |
+
"venue": "IEEE Transactions on Image Processing.",
|
| 163 |
+
"url": null
|
| 164 |
+
}
|
| 165 |
+
},
|
| 166 |
+
{
|
| 167 |
+
"4": {
|
| 168 |
+
"title": "Curriculum labeling: Revisiting pseudo-labeling for semi-supervised learning.",
|
| 169 |
+
"author": "Cascante-Bonilla, P.; Tan, F.; Qi, Y.; and Ordonez, V. 2021.",
|
| 170 |
+
"venue": "In Proceedings of the AAAI conference on artificial intelligence, volume 35, 6912\u20136920.",
|
| 171 |
+
"url": null
|
| 172 |
+
}
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"5": {
|
| 176 |
+
"title": "A simple framework for contrastive learning of visual representations.",
|
| 177 |
+
"author": "Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020.",
|
| 178 |
+
"venue": "In International conference on machine learning, 1597\u20131607. PMLR.",
|
| 179 |
+
"url": null
|
| 180 |
+
}
|
| 181 |
+
},
|
| 182 |
+
{
|
| 183 |
+
"6": {
|
| 184 |
+
"title": "Pali: A jointly-scaled multilingual language-image model.",
|
| 185 |
+
"author": "Chen, X.; Wang, X.; Changpinyo, S.; Piergiovanni, A.; Padlewski, P.; Salz, D.; Goodman, S.; Grycner, A.; Mustafa, B.; Beyer, L.; et al. 2022.",
|
| 186 |
+
"venue": "arXiv preprint arXiv:2209.06794.",
|
| 187 |
+
"url": null
|
| 188 |
+
}
|
| 189 |
+
},
|
| 190 |
+
{
|
| 191 |
+
"7": {
|
| 192 |
+
"title": "Vision\u2013language foundation model for echocardiogram interpretation.",
|
| 193 |
+
"author": "Christensen, M.; Vukadinovic, M.; Yuan, N.; and Ouyang, D. 2024.",
|
| 194 |
+
"venue": "Nature Medicine, 1\u20138.",
|
| 195 |
+
"url": null
|
| 196 |
+
}
|
| 197 |
+
},
|
| 198 |
+
{
|
| 199 |
+
"8": {
|
| 200 |
+
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding.",
|
| 201 |
+
"author": "Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018.",
|
| 202 |
+
"venue": "arXiv preprint arXiv:1810.04805.",
|
| 203 |
+
"url": null
|
| 204 |
+
}
|
| 205 |
+
},
|
| 206 |
+
{
|
| 207 |
+
"9": {
|
| 208 |
+
"title": "An image is worth 16x16 words: Transformers for image recognition at scale.",
|
| 209 |
+
"author": "Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020.",
|
| 210 |
+
"venue": "arXiv preprint arXiv:2010.11929.",
|
| 211 |
+
"url": null
|
| 212 |
+
}
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"10": {
|
| 216 |
+
"title": "On uni-modal feature learning in supervised multi-modal learning.",
|
| 217 |
+
"author": "Du, C.; Teng, J.; Li, T.; Liu, Y.; Yuan, T.; Wang, Y.; Yuan, Y.; and Zhao, H. 2023.",
|
| 218 |
+
"venue": "In International Conference on Machine Learning, 8632\u20138656. PMLR.",
|
| 219 |
+
"url": null
|
| 220 |
+
}
|
| 221 |
+
},
|
| 222 |
+
{
|
| 223 |
+
"11": {
|
| 224 |
+
"title": "Continuous-discrete convolution for geometry-sequence modeling in proteins.",
|
| 225 |
+
"author": "Fan, H.; Wang, Z.; Yang, Y.; and Kankanhalli, M. 2022.",
|
| 226 |
+
"venue": "In The Eleventh International Conference on Learning Representations.",
|
| 227 |
+
"url": null
|
| 228 |
+
}
|
| 229 |
+
},
|
| 230 |
+
{
|
| 231 |
+
"12": {
|
| 232 |
+
"title": "A survey on deep learning for multimodal data fusion.",
|
| 233 |
+
"author": "Gao, J.; Li, P.; Chen, Z.; and Zhang, J. 2020.",
|
| 234 |
+
"venue": "Neural Computation, 32(5): 829\u2013864.",
|
| 235 |
+
"url": null
|
| 236 |
+
}
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"13": {
|
| 240 |
+
"title": "Simcse: Simple contrastive learning of sentence embeddings.",
|
| 241 |
+
"author": "Gao, T.; Yao, X.; and Chen, D. 2021.",
|
| 242 |
+
"venue": "arXiv preprint arXiv:2104.08821.",
|
| 243 |
+
"url": null
|
| 244 |
+
}
|
| 245 |
+
},
|
| 246 |
+
{
|
| 247 |
+
"14": {
|
| 248 |
+
"title": "Softclip: Softer cross-modal alignment makes clip stronger.",
|
| 249 |
+
"author": "Gao, Y.; Liu, J.; Xu, Z.; Wu, T.; Zhang, E.; Li, K.; Yang, J.; Liu, W.; and Sun, X. 2024.",
|
| 250 |
+
"venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, 1860\u20131868.",
|
| 251 |
+
"url": null
|
| 252 |
+
}
|
| 253 |
+
},
|
| 254 |
+
{
|
| 255 |
+
"15": {
|
| 256 |
+
"title": "Structure-based protein function prediction using graph convolutional networks.",
|
| 257 |
+
"author": "Gligorijevi\u0107, V.; Renfrew, P. D.; Kosciolek, T.; Leman, J. K.; Berenberg, D.; Vatanen, T.; Chandler, C.; Taylor, B. C.; Fisk, I. M.; Vlamakis, H.; et al. 2021.",
|
| 258 |
+
"venue": "Nature communications, 12(1): 3168.",
|
| 259 |
+
"url": null
|
| 260 |
+
}
|
| 261 |
+
},
|
| 262 |
+
{
|
| 263 |
+
"16": {
|
| 264 |
+
"title": "Momentum contrast for unsupervised visual representation learning.",
|
| 265 |
+
"author": "He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020.",
|
| 266 |
+
"venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 9729\u20139738.",
|
| 267 |
+
"url": null
|
| 268 |
+
}
|
| 269 |
+
},
|
| 270 |
+
{
|
| 271 |
+
"17": {
|
| 272 |
+
"title": "Deep residual learning for image recognition.",
|
| 273 |
+
"author": "He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016.",
|
| 274 |
+
"venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, 770\u2013778.",
|
| 275 |
+
"url": null
|
| 276 |
+
}
|
| 277 |
+
},
|
| 278 |
+
{
|
| 279 |
+
"18": {
|
| 280 |
+
"title": "Intrinsic-extrinsic convolution and pooling for learning on 3d protein structures.",
|
| 281 |
+
"author": "Hermosilla, P.; Sch\u00e4fer, M.; Lang, M.; Fackelmann, G.; V\u00e1zquez, P. P.; Kozl\u00edkov\u00e1, B.; Krone, M.; Ritschel, T.; and Ropinski, T. 2020.",
|
| 282 |
+
"venue": "arXiv preprint arXiv:2007.06252.",
|
| 283 |
+
"url": null
|
| 284 |
+
}
|
| 285 |
+
},
|
| 286 |
+
{
|
| 287 |
+
"19": {
|
| 288 |
+
"title": "Framing image description as a ranking task: Data, models and evaluation metrics.",
|
| 289 |
+
"author": "Hodosh, M.; Young, P.; and Hockenmaier, J. 2013.",
|
| 290 |
+
"venue": "Journal of Artificial Intelligence Research, 47: 853\u2013899.",
|
| 291 |
+
"url": null
|
| 292 |
+
}
|
| 293 |
+
},
|
| 294 |
+
{
|
| 295 |
+
"20": {
|
| 296 |
+
"title": "The Platonic Representation Hypothesis.",
|
| 297 |
+
"author": "Huh, M.; Cheung, B.; Wang, T.; and Isola, P. 2024.",
|
| 298 |
+
"venue": "arXiv preprint arXiv:2405.07987.",
|
| 299 |
+
"url": null
|
| 300 |
+
}
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"21": {
|
| 304 |
+
"title": "Scaling up visual and vision-language representation learning with noisy text supervision.",
|
| 305 |
+
"author": "Jia, C.; Yang, Y.; Xia, Y.; Chen, Y.-T.; Parekh, Z.; Pham, H.; Le, Q.; Sung, Y.-H.; Li, Z.; and Duerig, T. 2021.",
|
| 306 |
+
"venue": "In International conference on machine learning, 4904\u20134916. PMLR.",
|
| 307 |
+
"url": null
|
| 308 |
+
}
|
| 309 |
+
},
|
| 310 |
+
{
|
| 311 |
+
"22": {
|
| 312 |
+
"title": "Vilt: Vision-and-language transformer without convolution or region supervision.",
|
| 313 |
+
"author": "Kim, W.; Son, B.; and Kim, I. 2021.",
|
| 314 |
+
"venue": "In International Conference on Machine Learning, 5583\u20135594. PMLR.",
|
| 315 |
+
"url": null
|
| 316 |
+
}
|
| 317 |
+
},
|
| 318 |
+
{
|
| 319 |
+
"23": {
|
| 320 |
+
"title": "Self-supervised learning in medicine and healthcare.",
|
| 321 |
+
"author": "Krishnan, R.; Rajpurkar, P.; and Topol, E. J. 2022.",
|
| 322 |
+
"venue": "Nature Biomedical Engineering, 6(12): 1346\u20131352.",
|
| 323 |
+
"url": null
|
| 324 |
+
}
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"24": {
|
| 328 |
+
"title": "Deep learning in multimodal remote sensing data fusion: A comprehensive review.",
|
| 329 |
+
"author": "Li, J.; Hong, D.; Gao, L.; Yao, J.; Zheng, K.; Zhang, B.; and Chanussot, J. 2022a.",
|
| 330 |
+
"venue": "International Journal of Applied Earth Observation and Geoinformation, 112: 102926.",
|
| 331 |
+
"url": null
|
| 332 |
+
}
|
| 333 |
+
},
|
| 334 |
+
{
|
| 335 |
+
"25": {
|
| 336 |
+
"title": "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation.",
|
| 337 |
+
"author": "Li, J.; Li, D.; Xiong, C.; and Hoi, S. 2022b.",
|
| 338 |
+
"venue": "In International Conference on Machine Learning, 12888\u201312900. PMLR.",
|
| 339 |
+
"url": null
|
| 340 |
+
}
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"26": {
|
| 344 |
+
"title": "Align before fuse: Vision and language representation learning with momentum distillation.",
|
| 345 |
+
"author": "Li, J.; Selvaraju, R.; Gotmare, A.; Joty, S.; Xiong, C.; and Hoi, S. C. H. 2021.",
|
| 346 |
+
"venue": "Advances in neural information processing systems, 34: 9694\u20139705.",
|
| 347 |
+
"url": null
|
| 348 |
+
}
|
| 349 |
+
},
|
| 350 |
+
{
|
| 351 |
+
"27": {
|
| 352 |
+
"title": "Grounded language-image pre-training.",
|
| 353 |
+
"author": "Li, L. H.; Zhang, P.; Zhang, H.; Yang, J.; Li, C.; Zhong, Y.; Wang, L.; Yuan, L.; Zhang, L.; Hwang, J.-N.; et al. 2022c.",
|
| 354 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10965\u201310975.",
|
| 355 |
+
"url": null
|
| 356 |
+
}
|
| 357 |
+
},
|
| 358 |
+
{
|
| 359 |
+
"28": {
|
| 360 |
+
"title": "Scaling language-image pre-training via masking.",
|
| 361 |
+
"author": "Li, Y.; Fan, H.; Hu, R.; Feichtenhofer, C.; and He, K. 2023.",
|
| 362 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 23390\u201323400.",
|
| 363 |
+
"url": null
|
| 364 |
+
}
|
| 365 |
+
},
|
| 366 |
+
{
|
| 367 |
+
"29": {
|
| 368 |
+
"title": "Language models of protein sequences at the scale of evolution enable accurate structure prediction.",
|
| 369 |
+
"author": "Lin, Z.; Akin, H.; Rao, R.; Hie, B.; Zhu, Z.; Lu, W.; dos Santos Costa, A.; Fazel-Zarandi, M.; Sercu, T.; Candido, S.; et al. 2022.",
|
| 370 |
+
"venue": "BioRxiv, 2022: 500902.",
|
| 371 |
+
"url": null
|
| 372 |
+
}
|
| 373 |
+
},
|
| 374 |
+
{
|
| 375 |
+
"30": {
|
| 376 |
+
"title": "Graph self-supervised learning: A survey.",
|
| 377 |
+
"author": "Liu, Y.; Jin, M.; Pan, S.; Zhou, C.; Zheng, Y.; Xia, F.; and Philip, S. Y. 2022.",
|
| 378 |
+
"venue": "IEEE transactions on knowledge and data engineering, 35(6): 5879\u20135900.",
|
| 379 |
+
"url": null
|
| 380 |
+
}
|
| 381 |
+
},
|
| 382 |
+
{
|
| 383 |
+
"31": {
|
| 384 |
+
"title": "Learning transferable features with deep adaptation networks.",
|
| 385 |
+
"author": "Long, M.; Cao, Y.; Wang, J.; and Jordan, M. 2015.",
|
| 386 |
+
"venue": "In International conference on machine learning, 97\u2013105. PMLR.",
|
| 387 |
+
"url": null
|
| 388 |
+
}
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"32": {
|
| 392 |
+
"title": "Multimodality in VR: A survey.",
|
| 393 |
+
"author": "Martin, D.; Malpica, S.; Gutierrez, D.; Masia, B.; and Serrano, A. 2022.",
|
| 394 |
+
"venue": "ACM Computing Surveys (CSUR), 54(10s): 1\u201336.",
|
| 395 |
+
"url": null
|
| 396 |
+
}
|
| 397 |
+
},
|
| 398 |
+
{
|
| 399 |
+
"33": {
|
| 400 |
+
"title": "S-CLIP: Semi-supervised Vision-Language Pre-training using Few Specialist Captions.",
|
| 401 |
+
"author": "Mo, S.; Kim, M.; Lee, K.; and Shin, J. 2023.",
|
| 402 |
+
"venue": "arXiv preprint arXiv:2305.14095.",
|
| 403 |
+
"url": null
|
| 404 |
+
}
|
| 405 |
+
},
|
| 406 |
+
{
|
| 407 |
+
"34": {
|
| 408 |
+
"title": "Slip: Self-supervision meets language-image pre-training.",
|
| 409 |
+
"author": "Mu, N.; Kirillov, A.; Wagner, D.; and Xie, S. 2022.",
|
| 410 |
+
"venue": "In European conference on computer vision, 529\u2013544. Springer.",
|
| 411 |
+
"url": null
|
| 412 |
+
}
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"35": {
|
| 416 |
+
"title": "Revisiting self-distillation.",
|
| 417 |
+
"author": "Pham, M.; Cho, M.; Joshi, A.; and Hegde, C. 2022.",
|
| 418 |
+
"venue": "arXiv preprint arXiv:2206.08491.",
|
| 419 |
+
"url": null
|
| 420 |
+
}
|
| 421 |
+
},
|
| 422 |
+
{
|
| 423 |
+
"36": {
|
| 424 |
+
"title": "Learning transferable visual models from natural language supervision.",
|
| 425 |
+
"author": "Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021.",
|
| 426 |
+
"venue": "In International conference on machine learning, 8748\u20138763. PMLR.",
|
| 427 |
+
"url": null
|
| 428 |
+
}
|
| 429 |
+
},
|
| 430 |
+
{
|
| 431 |
+
"37": {
|
| 432 |
+
"title": "ECOR: Explainable CLIP for Object Recognition.",
|
| 433 |
+
"author": "Rasekh, A.; Ranjbar, S. K.; Heidari, M.; and Nejdl, W. 2024.",
|
| 434 |
+
"venue": "arXiv preprint arXiv:2404.12839.",
|
| 435 |
+
"url": null
|
| 436 |
+
}
|
| 437 |
+
},
|
| 438 |
+
{
|
| 439 |
+
"38": {
|
| 440 |
+
"title": "High-resolution image synthesis with latent diffusion models.",
|
| 441 |
+
"author": "Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022.",
|
| 442 |
+
"venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 10684\u201310695.",
|
| 443 |
+
"url": null
|
| 444 |
+
}
|
| 445 |
+
},
|
| 446 |
+
{
|
| 447 |
+
"39": {
|
| 448 |
+
"title": "Nonlinear discriminant analysis using kernel functions.",
|
| 449 |
+
"author": "Roth, V.; and Steinhage, V. 1999.",
|
| 450 |
+
"venue": "Advances in neural information processing systems, 12.",
|
| 451 |
+
"url": null
|
| 452 |
+
}
|
| 453 |
+
},
|
| 454 |
+
{
|
| 455 |
+
"40": {
|
| 456 |
+
"title": "Flava: A foundational language and vision alignment model.",
|
| 457 |
+
"author": "Singh, A.; Hu, R.; Goswami, V.; Couairon, G.; Galuba, W.; Rohrbach, M.; and Kiela, D. 2022.",
|
| 458 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 15638\u201315650.",
|
| 459 |
+
"url": null
|
| 460 |
+
}
|
| 461 |
+
},
|
| 462 |
+
{
|
| 463 |
+
"41": {
|
| 464 |
+
"title": "A survey on semi-supervised learning.",
|
| 465 |
+
"author": "Van Engelen, J. E.; and Hoos, H. H. 2020.",
|
| 466 |
+
"venue": "Machine learning, 109(2): 373\u2013440.",
|
| 467 |
+
"url": null
|
| 468 |
+
}
|
| 469 |
+
},
|
| 470 |
+
{
|
| 471 |
+
"42": {
|
| 472 |
+
"title": "Attention is all you need.",
|
| 473 |
+
"author": "Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, \u0141.; and Polosukhin, I. 2017.",
|
| 474 |
+
"venue": "Advances in neural information processing systems, 30.",
|
| 475 |
+
"url": null
|
| 476 |
+
}
|
| 477 |
+
},
|
| 478 |
+
{
|
| 479 |
+
"43": {
|
| 480 |
+
"title": "Medclip: Contrastive learning from unpaired medical images and text.",
|
| 481 |
+
"author": "Wang, Z.; Wu, Z.; Agarwal, D.; and Sun, J. 2022.",
|
| 482 |
+
"venue": "arXiv preprint arXiv:2210.10163.",
|
| 483 |
+
"url": null
|
| 484 |
+
}
|
| 485 |
+
},
|
| 486 |
+
{
|
| 487 |
+
"44": {
|
| 488 |
+
"title": "Open-vclip: Transforming clip to an open-vocabulary video model via interpolated weight optimization.",
|
| 489 |
+
"author": "Weng, Z.; Yang, X.; Li, A.; Wu, Z.; and Jiang, Y.-G. 2023.",
|
| 490 |
+
"venue": "In International Conference on Machine Learning, 36978\u201336989. PMLR.",
|
| 491 |
+
"url": null
|
| 492 |
+
}
|
| 493 |
+
},
|
| 494 |
+
{
|
| 495 |
+
"45": {
|
| 496 |
+
"title": "Clipself: Vision transformer distills itself for open-vocabulary dense prediction.",
|
| 497 |
+
"author": "Wu, S.; Zhang, W.; Xu, L.; Jin, S.; Li, X.; Liu, W.; and Loy, C. C. 2023.",
|
| 498 |
+
"venue": "arXiv preprint arXiv:2310.01403.",
|
| 499 |
+
"url": null
|
| 500 |
+
}
|
| 501 |
+
},
|
| 502 |
+
{
|
| 503 |
+
"46": {
|
| 504 |
+
"title": "Protst: Multi-modality learning of protein sequences and biomedical texts.",
|
| 505 |
+
"author": "Xu, M.; Yuan, X.; Miret, S.; and Tang, J. 2023a.",
|
| 506 |
+
"venue": "arXiv preprint arXiv:2301.12040.",
|
| 507 |
+
"url": null
|
| 508 |
+
}
|
| 509 |
+
},
|
| 510 |
+
{
|
| 511 |
+
"47": {
|
| 512 |
+
"title": "vONTSS: vMF based semi-supervised neural topic modeling with optimal transport.",
|
| 513 |
+
"author": "Xu, W.; Jiang, X.; Sengamedu, S. H.; Iannacci, F.; and Zhao, J. 2023b.",
|
| 514 |
+
"venue": "arXiv preprint arXiv:2307.01226.",
|
| 515 |
+
"url": null
|
| 516 |
+
}
|
| 517 |
+
},
|
| 518 |
+
{
|
| 519 |
+
"48": {
|
| 520 |
+
"title": "A survey on deep semi-supervised learning.",
|
| 521 |
+
"author": "Yang, X.; Song, Z.; King, I.; and Xu, Z. 2022.",
|
| 522 |
+
"venue": "IEEE Transactions on Knowledge and Data Engineering, 35(9): 8934\u20138954.",
|
| 523 |
+
"url": null
|
| 524 |
+
}
|
| 525 |
+
},
|
| 526 |
+
{
|
| 527 |
+
"49": {
|
| 528 |
+
"title": "DMT-EV: An Explainable Deep Network for Dimension Reduction.",
|
| 529 |
+
"author": "Zang, Z.; Cheng, S.; Xia, H.; Li, L.; Sun, Y.; Xu, Y.; Shang, L.; Sun, B.; and Li, S. Z. 2024.",
|
| 530 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics, 30(3): 1710\u20131727.",
|
| 531 |
+
"url": null
|
| 532 |
+
}
|
| 533 |
+
},
|
| 534 |
+
{
|
| 535 |
+
"50": {
|
| 536 |
+
"title": "Protein Representation Learning by Geometric Structure Pretraining.",
|
| 537 |
+
"author": "Zhang, Z.; Xu, M.; Jamasb, A. R.; Chenthamarakshan, V.; Lozano, A.; Das, P.; and Tang, J. 2023.",
|
| 538 |
+
"venue": "In The Eleventh International Conference on Learning Representations.",
|
| 539 |
+
"url": null
|
| 540 |
+
}
|
| 541 |
+
},
|
| 542 |
+
{
|
| 543 |
+
"51": {
|
| 544 |
+
"title": "Lightweight Contrastive Protein Structure-Sequence Transformation.",
|
| 545 |
+
"author": "Zheng, J.; Wang, G.; Huang, Y.; Hu, B.; Li, S.; Tan, C.; Fan, X.; and Li, S. Z. 2023a.",
|
| 546 |
+
"venue": "arXiv preprint arXiv:2303.11783.",
|
| 547 |
+
"url": null
|
| 548 |
+
}
|
| 549 |
+
},
|
| 550 |
+
{
|
| 551 |
+
"52": {
|
| 552 |
+
"title": "Semi-supervised offline reinforcement learning with action-free trajectories.",
|
| 553 |
+
"author": "Zheng, Q.; Henaff, M.; Amos, B.; and Grover, A. 2023b.",
|
| 554 |
+
"venue": "In International conference on machine learning, 42339\u201342362. PMLR.",
|
| 555 |
+
"url": null
|
| 556 |
+
}
|
| 557 |
+
},
|
| 558 |
+
{
|
| 559 |
+
"53": {
|
| 560 |
+
"title": "Semi-supervised domain generalization with stochastic stylematch.",
|
| 561 |
+
"author": "Zhou, K.; Loy, C. C.; and Liu, Z. 2023.",
|
| 562 |
+
"venue": "International Journal of Computer Vision, 131(9): 2377\u20132387.",
|
| 563 |
+
"url": null
|
| 564 |
+
}
|
| 565 |
+
},
|
| 566 |
+
{
|
| 567 |
+
"54": {
|
| 568 |
+
"title": "Unsupervised vision-and-language pre-training via retrieval-based multi-granular alignment.",
|
| 569 |
+
"author": "Zhou, M.; Yu, L.; Singh, A.; Wang, M.; Yu, Z.; and Zhang, N. 2022.",
|
| 570 |
+
"venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16485\u201316494.",
|
| 571 |
+
"url": null
|
| 572 |
+
}
|
| 573 |
+
},
|
| 574 |
+
{
|
| 575 |
+
"55": {
|
| 576 |
+
"title": "Anomalyclip: Object-agnostic prompt learning for zero-shot anomaly detection.",
|
| 577 |
+
"author": "Zhou, Q.; Pang, G.; Tian, Y.; He, S.; and Chen, J. 2023.",
|
| 578 |
+
"venue": "arXiv preprint arXiv:2310.18961.",
|
| 579 |
+
"url": null
|
| 580 |
+
}
|
| 581 |
+
},
|
| 582 |
+
{
|
| 583 |
+
"56": {
|
| 584 |
+
"title": "Self-Supervised Multimodal Learning: A Survey.",
|
| 585 |
+
"author": "Zong, Y.; Mac Aodha, O.; and Hospedales, T. 2024.",
|
| 586 |
+
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence.",
|
| 587 |
+
"url": null
|
| 588 |
+
}
|
| 589 |
+
}
|
| 590 |
+
],
|
| 591 |
+
"url": "http://arxiv.org/html/2406.05766v2"
|
| 592 |
+
}
|
20240921/2406.06799v2.json
ADDED
|
@@ -0,0 +1,61 @@
| 1 |
+
{
|
| 2 |
+
"title": "LLM-dCache: Improving Tool-Augmented LLMs with GPT-Driven Localized Data Caching",
|
| 3 |
+
"abstract": "As Large Language Models (LLMs) broaden their capabilities to manage thousands of API calls, they are confronted with complex data operations across vast datasets with significant overhead to the underlying system. In this work, we introduce LLM-dCache to optimize data accesses by treating cache operations as callable API functions exposed to the tool-augmented agent. We grant LLMs the autonomy to manage cache decisions via prompting, seamlessly integrating with existing function-calling mechanisms. Tested on an industry-scale massively parallel platform that spans hundreds of GPT endpoints and terabytes of imagery, our method improves Copilot times by an average of 1.24 across various LLMs and prompting techniques.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Recent advances in Large Language Models (LLMs) have enhanced their reasoning capabilities towards solving complex problems, allowing them to manage thousands of tools and API calls efficiently [1 ###reference_b1###, 2 ###reference_b2###]. These improvements have unlocked their potential across large-scale systems, including UI/Web interfaces [3 ###reference_b3###, 4 ###reference_b4###], mobile apps [5 ###reference_b5###], SQL backends [6 ###reference_b6###], and remote sensing platforms [7 ###reference_b7###]. These uses exemplify system-level complexity by requiring integration of various APIs for loading, filtering, processing, and visualizing data across multiple temporal and spatial dimensions [8 ###reference_b8###].\nAs Copilots scale, the overhead on the underlying stack increases, from cloud endpoints to local execution devices [9 ###reference_b9###, 10 ###reference_b10###], catalyzing a fundamental shift in how we design large-scale LLM-based systems and software [11 ###reference_b11###, 4 ###reference_b4###]. However, early system optimizations primarily target simplified queries or well-defined benchmarks [12 ###reference_b12###] that might not capture nuanced task patterns and data dependencies at the system level [13 ###reference_b13###]. In realistic LLM workloads, data exhibits significant reusability. Consider an analyst who asks \u201cshow me satellite images around Newport Beach, CA.\u201d with a subsequent prompt \u201cNow, detect airplanes in this area,\u201d demonstrating a scenario where data elements are repeatedly accessed.\nIn this work, we draw inspiration from spatiotemporal reusability patterns akin to those observed in CPU cache systems and we introduce LLM-dCache, a GPT-driven caching strategy to optimize LLM data access patterns. Our key intuition lies in a novel design choice to seamlessly integrate cache management as one of the LLM tools, facilitating a fully GPT-driven plug-and-play approach compatible with existing function-calling mechanisms with minimal overhead. Evaluated on a large-scale geospatial platform [13 ###reference_b13###], LLM-dCache achieves latency reductions across various agents. We hope these findings motivate further exploration into empowering LLMs with other system level optimizations."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Related work",
|
| 15 |
+
"text": "Model-level LLM optimization: several works aim to enhance LLM efficiency via model design improvements, such as quantization [14 ###reference_b14###], pruning [15 ###reference_b15###], KV token caching [16 ###reference_b16###], or token compression [17 ###reference_b17###]. Despite these advances, as motivated in [11 ###reference_b11###], these techniques might have limitation in scenarios involving immutable black-box LLM models within cloud-based systems, where direct modifications to models and their inference mechanisms are limited [18 ###reference_b18###]. We therefore focus on design optimizations at the system level [19 ###reference_b19###, 20 ###reference_b20###], which are especially important in large-scale Copilot platforms.\nApplication-level LLM optimization: methodologies such as MemGPT [19 ###reference_b19###] and \u201cmodel-as-a-resource\u201d caching [21 ###reference_b21###] align with our motivation. We also note advancements from the open-source community, with LangChain now supporting prompt-caching [22 ###reference_b22###]. Similarly, drawing from hardware design and parallel computing, recent methods [11 ###reference_b11###, 12 ###reference_b12###, 20 ###reference_b20###] explore parallel execution strategies. While these methods offer benefits for parallel or repeating tasks, they overlook the critical aspect of data locality, as they assume task chains with short-horizons [11 ###reference_b11###] or template-based question-answer pairs with simplistic task interdependencies [12 ###reference_b12###]."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Methodology",
|
| 21 |
+
"text": "Our goal is to design and assess LLM-dCache on realistic data patterns in large-scale cloud-first Copilot systems.\nCache operations: We aim to explore GPT\u2019s ability to understand when to read and use caching to execute a given task, as well as whether GPT is able to effectively implement a cache update policy autonomously. To allow GPT to handle system-level decisions via in-context prompting, we therefore define the operation of loading cache data as a tool in GPT function calling, i.e., exposing its function definition in the GPT API call alongside other tool descriptions. Upon receiving a user query, GPT is informed of the current cache contents and decides whether to execute the cache loading tool.\nSimilarly, we experiment with an entirely prompt-based implementation of cache updating. We succinctly describe the update policy to GPT and furnish it with this round\u2019s load operations and cache contents in JSON format, then query GPT to return the updated cache state. We opt for the Least Recently Used (LRU) scheme as our primary cache update strategy, while we ablate other schemes.\nFraming caching functions as GPT tools streamlines our implementation and makes it platform-agnostic. The cache read and update operations become part of GPT\u2019s decision-making process, thus requiring minimal changes. Additionally, granting the LLM autonomy over cache decisions allows our method to handle cache misses: upon a failed function call, the LLM is prompted to reassess its tool sequence, just as it would any other tool-selection missteps where the API return-message indicates a failure. This abstraction, simulating a main memory read after a cache-hit scenario and managed entirely at the LLM level, effectively positions the LLM as a memory controller. Such dynamic adaptability is key to rectifying inaccuracies in tool selection in real-time.\nCache specifications: We represent and retrieve data as key-value pairs. As we operate on top of geospatial data, we opt to use the string template dataset-year as cache keys. We find this temporal granularity to be the most sensible (as opposed to longitude-latitude coordinates due to the spatial skewness of data around regions of interest like major cities). We then store as values the GeoPandas DataFrames containing the respective yearly imagery metadata \u2013 filenames, coordinates, detections, timestamps, etc. As is common in many geospatial platforms, the actual image files are not loaded into memory until needed for specific subsequent operations. As the yearly GeoPandas DataFrames typically occupy 50-100 MB, so we find it reasonable to set a cache size limit of 5 entries at a time. We note that such design choices are likely to be application specific, and we leave further ablations for future work.\n###table_1###"
|
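The cache design described in the entry above lends itself to a short illustration. Below is a minimal Python sketch, not the paper's released implementation: an LRU store keyed by the dataset-year string template and capped at 5 entries, together with a hypothetical function-calling tool definition of the kind the text says is exposed to GPT alongside the other tools. The names GeoCache and load_cached_dataframe and the schema fields are assumptions made for illustration.

from collections import OrderedDict

class GeoCache:
    # Minimal LRU sketch: keys are "dataset-year" strings, values are the
    # (assumed) GeoPandas DataFrames holding yearly imagery metadata.
    def __init__(self, max_entries=5):          # 5-entry limit, as described in the text
        self.max_entries = max_entries
        self.entries = OrderedDict()

    def get(self, key):
        if key not in self.entries:
            return None                          # cache miss: caller falls back to main memory
        self.entries.move_to_end(key)            # mark as most recently used
        return self.entries[key]

    def put(self, key, dataframe):
        self.entries[key] = dataframe
        self.entries.move_to_end(key)
        if len(self.entries) > self.max_entries:
            self.entries.popitem(last=False)     # evict the least recently used entry

# Hypothetical tool schema exposed to the LLM next to the other API tools,
# so the model itself decides when to read from the cache instead of main memory.
load_cache_tool = {
    "name": "load_cached_dataframe",
    "description": "Load a yearly imagery-metadata DataFrame from the local cache.",
    "parameters": {
        "type": "object",
        "properties": {"key": {"type": "string", "description": "Cache key following the dataset-year template."}},
        "required": ["key"],
    },
}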
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "IV Experimental Setup",
|
| 27 |
+
"text": "We use GeoLLM-Engine [13 ###reference_b13###], a large-scale parameterizable LLM engine for geospatial tasks. Designed to capture agentic performance across hundreds of tools, the platform is equipped with long-horizon multi-tool LLM operations that require frequent data retrieval and filtering, a comprehensive suite of open-source APIs, interactive map UI, RAG, and data retrieval tools with over 1.1 million satellite images.\nBenchmark. We expand the GeoLLM-Engine sampler to obtain variants of the GeoLLM-Engine-1k dataset. Specifically, we extend the sampling-rate parameters and we incorporate rates that control the likelihood of data reuse. We selectively sample prompts with an 80% probability of requiring data already present in the cache, constructing a test dataset of 1,000 multi-step prompts (with an overall set of approximately 50,000 tool calls). Additionally, we prepare a mini 500 query set for ablations. Last, we use the model-checker module to verify the functional correctness of the generated tasks.\nMetrics. For agent performance, we adhere to established evaluation practices [1 ###reference_b1###, 7 ###reference_b7###], measuring the Success Rate (proportion of tasks successfully completed), the Correctness Ratio (proportion of correct tool calls, since an erroneous tool might not affect successful task completion), and the ROUGE-L score. We also report performance on the underlying remote sensing tasks, with F1 and recall for object detection and land coverage classification (LCC), respectively, and ROUGE for visual question answering (VQA) [13 ###reference_b13###].\nTo evaluate cache effectiveness, we report GPT-hits (i.e., the LLM correctly utilizes the cache over main memory). We also track the average number of tokens and time per task, with an expectation that higher cache reuse (being 5-10 faster than main memory access) will result in reduced overall API completion times. To capture latency, we follow [20 ###reference_b20###] by maintaining a running average per tool operation, discarding any outliers beyond two standard deviations from the mean. To avoid congestion and ensure accurate endpoint response times, we deploy hundreds of GPT instances specifically for this evaluation, isolated from production traffic."
|
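The latency accounting described above (a running average per tool operation, with outliers beyond two standard deviations discarded) might look like the following minimal Python sketch; the function names are illustrative assumptions rather than the platform's actual API.

import statistics
from collections import defaultdict

tool_latencies = defaultdict(list)  # tool name -> observed latencies in seconds

def record_latency(tool_name, seconds):
    tool_latencies[tool_name].append(seconds)

def average_latency(tool_name):
    samples = tool_latencies[tool_name]
    if len(samples) < 3:
        return sum(samples) / len(samples) if samples else 0.0
    mean = statistics.mean(samples)
    std = statistics.stdev(samples)
    kept = [s for s in samples if abs(s - mean) <= 2 * std]  # drop outliers beyond 2 sigma
    return statistics.mean(kept) if kept else mean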
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "5",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Results",
|
| 33 |
+
"text": "LLM-dCache improves task-completion times across different configurations \u2013 GPT-4 and GPT-3.5, with Chain-of-Thought and ReAct, in both few-shot and zero-shot scenarios \u2013 by 1.24 on average (Table I ###reference_###). Caching does not degrade the quality of output and functionality of the agent, as agent metrics are within established variance [20 ###reference_b20###]. Overall, we notice that gains primarily depend on dataset reusability patterns, not the choice of model or prompting strategy.\nTo corroborate this observation, we conduct an ablation with multiple mini-val subsets, each containing 500 queries but with varying reusability rates. Table II ###reference_### (top) shows higher reusability rates correlate with greater latency savings. LRU, LFU, RR, and FIFO produce no clear latency differences.\nWe aim to position our exploration within a broader shift towards empowering LLMs with system-level optimization decisions. To this end, we make the deliberate choice of treating cache operations as prompt-based GPT tools (e.g., explaining the LRU scheme via prompts) instead of a direct programmatic implementation of the logic. In support of this, our ablation in Table III ###reference_### compares programmatic cache operations with those driven by GPT. We find that all GPT-driven variants closely match the fully programmatic approach, which could be considered an upper-bound in terms of effectiveness and reliability, with cache \u201chit\u201d rates consistently around 97% and similar latency. This demonstrates the versatility and potential of LLM-guided cache management in lieu of traditional programmatic solutions. Our hope is that this perspective will motivate work for integrating LLMs into other system design optimizations [23 ###reference_b23###], from execution at the edge [24 ###reference_b24###] to energy/power optimizations and thermal management [25 ###reference_b25###].\nLimitations and future work. Our study focuses on agentic performance and average latency for cloud-first environments with extensive use of cloud endpoints. It is meaningful to include more system performance metrics, such as energy and power consumption. To this end, we will explore GPT alternatives that can be run locally, such as Llama-3 and Phi-3.5. Given that our approach implements cache operations as callable API tools, we should be able to seamlessly incorporate this with other non-GPT tool-augmented agents across different computational environments. Last, we plan to extend our evaluation beyond the geospatial domain to a wider range of orthogonal tasks also considered in recent system-level LLM optimization papers [11 ###reference_b11###, 10 ###reference_b10###]."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "6",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "VI Conclusion",
|
| 39 |
+
"text": "In this paper, we introduced LLM-dCache, a framework designed to optimize LLM data access patterns through a cache mechanism treated as callable API tools. By allowing LLMs to autonomously manage cache operations, we integrated caching with existing function-calling mechanisms, enabling improvements in system efficiency across various models and prompting techniques. Our work underscores the potential of leveraging LLMs for system-level optimizations in complex data-intensive environments."
|
| 40 |
+
}
|
| 41 |
+
],
|
| 42 |
+
"appendix": [],
|
| 43 |
+
"tables": {
|
| 44 |
+
"1": {
|
| 45 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>LLM-dCache achieves latency reductions across models and prompting techniques with no degradation in overall agentic performance, as agent metrics are within established variance bounds\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2406.06799v2#bib.bib20\" title=\"\">20</a>]</cite>.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T1.16.16\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.1.1.1.2.1\">Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.1.1.1.3.1\">LLM-dCache</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.4\">Success</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.5\">Correctness</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.6\">Obj. Det</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.7\">LCC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.8\">VQA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.9\">Avg Tokens</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.10\">Avg Time</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.1.1.1.1.1\">Speedup </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.8.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.2.2.2.1\">Rate (%) \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.3.3.3.2\">Rate (%) \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.4.4.4.3\">F1 (%) \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.5.5.5.4\">R (%) \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.6.6.6.5\">Rouge-L \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.7.7.7.6\">/ Task \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.8.8.8.7\">/ Task (s) \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.16.16.17.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" colspan=\"10\" id=\"S3.T1.16.16.17.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.16.16.17.1.1.1\">GPT-3.5 Turbo</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.16.16.18.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.18.2.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.18.2.1.1\">CoT - Zero-Shot</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.18.2.2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.18.2.2.1\" style=\"color:#FF0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.18.2.3\">49.45</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.18.2.4\">38.47</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.18.2.5\">70.68</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.18.2.6\">70.19</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.18.2.7\">56.62</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.18.2.8\">25.23k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.18.2.9\">6.96</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.18.2.10\">\u2013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.9.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.9.9.9.2\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.9.9.9.2.1\" style=\"color:#00FF00;background-color:#D9ECEC;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.9.9.9.3\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.9.9.9.3.1\" style=\"background-color:#D9ECEC;\">49.40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.9.9.9.4\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.9.9.9.4.1\" style=\"background-color:#D9ECEC;\">37.96</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.9.9.9.5\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.9.9.9.5.1\" style=\"background-color:#D9ECEC;\">69.71</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.9.9.9.6\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.9.9.9.6.1\" style=\"background-color:#D9ECEC;\">71.23</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.9.9.9.7\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.9.9.9.7.1\" style=\"background-color:#D9ECEC;\">55.57</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.9.9.9.8\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.9.9.9.8.1\" style=\"background-color:#D9ECEC;\">25.55k</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.9.9.9.9\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.9.9.9.9.1\" style=\"background-color:#D9ECEC;\">5.67</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.9.9.9.1\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.9.9.9.1.1\" style=\"background-color:#D9ECEC;\">1.23 </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.16.16.19.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.19.3.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.19.3.1.1\">CoT - Few-Shot</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.19.3.2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.19.3.2.1\" style=\"color:#FF0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.19.3.3\">54.42</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.19.3.4\">70.50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.19.3.5\">89.03</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.19.3.6\">82.19</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.19.3.7\">62.58</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.19.3.8\">30.81k</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.19.3.9\">6.52</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.19.3.10\">\u2013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.10.10.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.10.10.10.2\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.10.10.10.2.1\" style=\"color:#00FF00;background-color:#D9ECEC;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.10.10.10.3\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.10.10.10.3.1\" style=\"background-color:#D9ECEC;\">54.07</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.10.10.10.4\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.10.10.10.4.1\" style=\"background-color:#D9ECEC;\">69.61</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.10.10.10.5\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.10.10.10.5.1\" style=\"background-color:#D9ECEC;\">88.12</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.10.10.10.6\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.10.10.10.6.1\" style=\"background-color:#D9ECEC;\">81.31</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.10.10.10.7\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.10.10.10.7.1\" style=\"background-color:#D9ECEC;\">62.08</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.10.10.10.8\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.10.10.10.8.1\" style=\"background-color:#D9ECEC;\">30.02k</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.10.10.10.9\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.10.10.10.9.1\" style=\"background-color:#D9ECEC;\">5.29</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.10.10.10.1\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.10.10.10.1.1\" style=\"background-color:#D9ECEC;\">1.23 </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.16.16.20.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.20.4.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.20.4.1.1\">ReAct - Zero-Shot</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.20.4.2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.20.4.2.1\" style=\"color:#FF0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.20.4.3\">50.85</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.20.4.4\">70.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.20.4.5\">87.94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.20.4.6\">89.12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.20.4.7\">61.41</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.20.4.8\">27.09k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.20.4.9\">7.29</td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.20.4.10\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.11.11.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S3.T1.11.11.11.2\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.11.11.11.2.1\" style=\"color:#00FF00;background-color:#D9ECEC;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.11.11.11.3\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.11.11.11.3.1\" style=\"background-color:#D9ECEC;\">50.47</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.11.11.11.4\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.11.11.11.4.1\" style=\"background-color:#D9ECEC;\">68.91</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.11.11.11.5\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.11.11.11.5.1\" style=\"background-color:#D9ECEC;\">80.42</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.11.11.11.6\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.11.11.11.6.1\" style=\"background-color:#D9ECEC;\">89.31</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.11.11.11.7\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.11.11.11.7.1\" style=\"background-color:#D9ECEC;\">60.78</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.11.11.11.8\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.11.11.11.8.1\" style=\"background-color:#D9ECEC;\">27.65k</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.11.11.11.9\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.11.11.11.9.1\" style=\"background-color:#D9ECEC;\">5.47</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.11.11.11.1\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.11.11.11.1.1\" style=\"background-color:#D9ECEC;\">1.33 </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.16.16.21.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.21.5.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.21.5.1.1\">ReAct - Few-Shot</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.21.5.2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.21.5.2.1\" style=\"color:#FF0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.21.5.3\">63.45</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.21.5.4\">71.06</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.21.5.5\">82.59</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.21.5.6\">92.36</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.21.5.7\">69.35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.21.5.8\">34.40k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.21.5.9\">6.64</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.21.5.10\">\u2013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.12.12.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.12.12.12.2\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.12.12.12.2.1\" style=\"color:#00FF00;background-color:#D9ECEC;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.12.12.12.3\" 
style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.12.12.12.3.1\" style=\"background-color:#D9ECEC;\">63.14</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.12.12.12.4\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.12.12.12.4.1\" style=\"background-color:#D9ECEC;\">69.17</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.12.12.12.5\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.12.12.12.5.1\" style=\"background-color:#D9ECEC;\">81.19</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.12.12.12.6\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.12.12.12.6.1\" style=\"background-color:#D9ECEC;\">88.41</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.12.12.12.7\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.12.12.12.7.1\" style=\"background-color:#D9ECEC;\">65.76</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.12.12.12.8\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.12.12.12.8.1\" style=\"background-color:#D9ECEC;\">34.86k</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.12.12.12.9\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.12.12.12.9.1\" style=\"background-color:#D9ECEC;\">5.77</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.12.12.12.1\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.12.12.12.1.1\" style=\"background-color:#D9ECEC;\">1.15 </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.16.16.22.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_tt\" colspan=\"10\" id=\"S3.T1.16.16.22.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.16.16.22.6.1.1\">GPT-4 Turbo</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.16.16.23.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.23.7.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.23.7.1.1\">CoT - Zero-Shot</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.23.7.2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.23.7.2.1\" style=\"color:#FF0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.23.7.3\">70.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.23.7.4\">82.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.23.7.5\">86.34</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.23.7.6\">84.91</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.23.7.7\">69.78</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.23.7.8\">26.81k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.23.7.9\">6.79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.23.7.10\">\u2013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.13.13.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.13.13.13.2\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.13.13.13.2.1\" style=\"color:#00FF00;background-color:#D9ECEC;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.13.13.13.3\" 
style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.13.13.13.3.1\" style=\"background-color:#D9ECEC;\">70.08</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.13.13.13.4\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.13.13.13.4.1\" style=\"background-color:#D9ECEC;\">82.25</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.13.13.13.5\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.13.13.13.5.1\" style=\"background-color:#D9ECEC;\">87.64</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.13.13.13.6\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.13.13.13.6.1\" style=\"background-color:#D9ECEC;\">84.42</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.13.13.13.7\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.13.13.13.7.1\" style=\"background-color:#D9ECEC;\">70.14</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.13.13.13.8\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.13.13.13.8.1\" style=\"background-color:#D9ECEC;\">26.91k</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.13.13.13.9\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.13.13.13.9.1\" style=\"background-color:#D9ECEC;\">5.16</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.13.13.13.1\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.13.13.13.1.1\" style=\"background-color:#D9ECEC;\">1.32 </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.16.16.24.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.24.8.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.24.8.1.1\">CoT - Few-Shot</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.24.8.2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.24.8.2.1\" style=\"color:#FF0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.24.8.3\">72.89</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.24.8.4\">84.87</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.24.8.5\">83.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.24.8.6\">97.29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.24.8.7\">72.15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.24.8.8\">28.49k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.24.8.9\">6.76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.24.8.10\">\u2013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.14.14.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.14.14.2\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.14.14.14.2.1\" style=\"color:#00FF00;background-color:#D9ECEC;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.14.14.3\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.14.14.14.3.1\" style=\"background-color:#D9ECEC;\">72.16</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.14.14.4\" style=\"background-color:#D9ECEC;\"><span 
class=\"ltx_text\" id=\"S3.T1.14.14.14.4.1\" style=\"background-color:#D9ECEC;\">85.48</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.14.14.5\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.14.14.14.5.1\" style=\"background-color:#D9ECEC;\">82.95</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.14.14.6\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.14.14.14.6.1\" style=\"background-color:#D9ECEC;\">99.72</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.14.14.7\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.14.14.14.7.1\" style=\"background-color:#D9ECEC;\">72.72</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.14.14.8\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.14.14.14.8.1\" style=\"background-color:#D9ECEC;\">28.92k</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.14.14.9\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.14.14.14.9.1\" style=\"background-color:#D9ECEC;\">5.09</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.14.14.14.1\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.14.14.14.1.1\" style=\"background-color:#D9ECEC;\">1.33 </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.16.16.25.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.25.9.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.25.9.1.1\">ReAct - Zero-Shot</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.25.9.2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.25.9.2.1\" style=\"color:#FF0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.25.9.3\">74.30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.25.9.4\">85.80</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.25.9.5\">88.49</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.25.9.6\">94.52</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.25.9.7\">72.18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.25.9.8\">30.51k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.25.9.9\">6.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.25.9.10\">\u2013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.15.15.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.15.15.15.2\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.15.15.15.2.1\" style=\"color:#00FF00;background-color:#D9ECEC;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.15.15.15.3\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.15.15.15.3.1\" style=\"background-color:#D9ECEC;\">74.70</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.15.15.15.4\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.15.15.15.4.1\" style=\"background-color:#D9ECEC;\">85.46</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.15.15.15.5\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.15.15.15.5.1\" 
style=\"background-color:#D9ECEC;\">89.27</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.15.15.15.6\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.15.15.15.6.1\" style=\"background-color:#D9ECEC;\">92.89</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.15.15.15.7\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.15.15.15.7.1\" style=\"background-color:#D9ECEC;\">71.85</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.15.15.15.8\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.15.15.15.8.1\" style=\"background-color:#D9ECEC;\">30.45k</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.15.15.15.9\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.15.15.15.9.1\" style=\"background-color:#D9ECEC;\">5.70</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.15.15.15.1\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.15.15.15.1.1\" style=\"background-color:#D9ECEC;\">1.17 </span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.16.16.26.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.26.10.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.26.10.1.1\">ReAct - Few-Shot</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.26.10.2\"><span class=\"ltx_text\" id=\"S3.T1.16.16.26.10.2.1\" style=\"color:#FF0000;\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.26.10.3\">76.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.26.10.4\">85.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.26.10.5\">64.49</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.26.10.6\">98.95</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.26.10.7\">74.23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.26.10.8\">36.62k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.26.10.9\">6.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.26.10.10\">\u2013</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.16.16.16\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T1.16.16.16.2\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.16.16.16.2.1\" style=\"color:#00FF00;background-color:#D9ECEC;\">\u2713</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T1.16.16.16.3\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.16.16.16.3.1\" style=\"background-color:#D9ECEC;\">76.28</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T1.16.16.16.4\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.16.16.16.4.1\" style=\"background-color:#D9ECEC;\">85.46</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T1.16.16.16.5\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.16.16.16.5.1\" style=\"background-color:#D9ECEC;\">65.17</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T1.16.16.16.6\" style=\"background-color:#D9ECEC;\"><span 
class=\"ltx_text\" id=\"S3.T1.16.16.16.6.1\" style=\"background-color:#D9ECEC;\">99.50</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T1.16.16.16.7\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.16.16.16.7.1\" style=\"background-color:#D9ECEC;\">74.13</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T1.16.16.16.8\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.16.16.16.8.1\" style=\"background-color:#D9ECEC;\">36.68k</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T1.16.16.16.9\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.16.16.16.9.1\" style=\"background-color:#D9ECEC;\">5.72</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T1.16.16.16.1\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T1.16.16.16.1.1\" style=\"background-color:#D9ECEC;\">1.17 </span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 46 |
+
"capture": "TABLE I: LLM-dCache achieves latency reductions across models and prompting techniques with no degradation in overall agentic performance, as agent metrics are within established variance bounds\u00a0[20]."
|
| 47 |
+
},
|
| 48 |
+
"2": {
|
| 49 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Zero-shot CoT (GPT-3.5 Turbo) runtime shows that overall latency reduction is highly dependent on data reuse rates. At high reuse, we observe only slight variability among different cache policies.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_rr ltx_border_t\" id=\"S3.T2.1.1.2.1.1\">Cache Policy</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_rr ltx_border_t\" id=\"S3.T2.1.1.2.1.2\">No Cache</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_rr ltx_border_t\" colspan=\"5\" id=\"S3.T2.1.1.2.1.3\">LRU</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.2.1.4\">LFU</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.2.1.5\">RR</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.2.1.6\">FIFO</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.3.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_rr ltx_border_t\" id=\"S3.T2.1.1.3.1.1\">Data Reuse Rate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S3.T2.1.1.3.1.2\">\u2013</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3.1.3\">0%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3.1.4\">20%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3.1.5\">40%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3.1.6\">60%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S3.T2.1.1.3.1.7\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.3.1.7.1\" style=\"background-color:#D9ECEC;\">80%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3.1.8\">80%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3.1.9\">80%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.3.1.10\">80%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_rr ltx_border_t\" id=\"S3.T2.1.1.1.1\">Avg Time/Task (s) \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr ltx_border_t\" id=\"S3.T2.1.1.1.2\">5.81</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1.3\">5.81</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1.4\">5.84</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1.5\">5.62</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1.6\">5.03</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_rr ltx_border_t\" id=\"S3.T2.1.1.1.7\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S3.T2.1.1.1.7.1\" style=\"background-color:#D9ECEC;\">4.92</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" 
id=\"S3.T2.1.1.1.8\">5.16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1.9\">5.36</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1.10\">5.25</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 50 |
+
"capture": "TABLE II: Zero-shot CoT (GPT-3.5 Turbo) runtime shows that overall latency reduction is highly dependent on data reuse rates. At high reuse, we observe only slight variability among different cache policies."
|
| 51 |
+
},
|
| 52 |
+
"3": {
|
| 53 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>GPT-driven cache operations produce performance metrics and latency very similar to programmatic implementation of caching, demonstrating GPT\u2019s ability to successfully execute system optimization tasks.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.8.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.8.8.9.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T3.8.8.9.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.8.8.9.1.1.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.2\">Cache</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_rr ltx_border_t\" id=\"S4.T3.8.8.9.1.3\">Policy</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.4\">Cache Hit</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.5\">Success</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.6\">Correctness</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.7\">Obj. Det</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.8\">LCC</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.9\">VQA</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.10\">Avg Tokens</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.9.1.11\">Avg Time</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.8.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"S4.T3.8.8.8.9\">Read</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_rr\" id=\"S4.T3.8.8.8.10\">Imp.</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T3.1.1.1.1\">Rate (%) \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T3.2.2.2.2\">Rt (%) \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T3.3.3.3.3\">Rt (%) \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T3.4.4.4.4\">F1 (%) \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T3.5.5.5.5\">R (%) \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T3.6.6.6.6\">Rouge-L \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T3.7.7.7.7\">/Task \n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T3.8.8.8.8\">/ Task (s) \n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.8.8.10.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T3.8.8.10.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.10.1.2\">Python</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_rr 
ltx_border_t\" id=\"S4.T3.8.8.10.1.3\">Python</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.10.1.4\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.10.1.5\">72.49</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.10.1.6\">85.40</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.10.1.7\">85.11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.10.1.8\">99.46</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.10.1.9\">72.64</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.10.1.10\">28.76k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.8.8.10.1.11\">5.07</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.8.8.11.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_rr\" id=\"S4.T3.8.8.11.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.8.11.2.1.1\">GPT-4 Turbo</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.8.8.11.2.2\">GPT-4</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_rr\" id=\"S4.T3.8.8.11.2.3\">Python</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.11.2.4\">96.59</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.11.2.5\">72.16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.11.2.6\">85.41</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.11.2.7\">83.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.11.2.8\">98.69</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.11.2.9\">72.35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.11.2.10\">28.73k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.11.2.11\">5.11</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.8.8.12.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_rr\" id=\"S4.T3.8.8.12.3.1\">CoT - Few-Shot</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T3.8.8.12.3.2\">Python</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_rr\" id=\"S4.T3.8.8.12.3.3\">GPT-4</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.12.3.4\">97.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.12.3.5\">72.29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.12.3.6\">84.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.12.3.7\">82.79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.12.3.8\">99.59</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.12.3.9\">72.09</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.12.3.10\">28.64k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.8.8.12.3.11\">5.09</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.8.8.13.4\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_rr\" id=\"S4.T3.8.8.13.4.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.13.4.2\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S4.T3.8.8.13.4.2.1\" style=\"background-color:#D9ECEC;\">GPT-4</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th 
ltx_th_row ltx_border_b ltx_border_rr\" id=\"S4.T3.8.8.13.4.3\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S4.T3.8.8.13.4.3.1\" style=\"background-color:#D9ECEC;\">GPT-4</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.13.4.4\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S4.T3.8.8.13.4.4.1\" style=\"background-color:#D9ECEC;\">96.16</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.13.4.5\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S4.T3.8.8.13.4.5.1\" style=\"background-color:#D9ECEC;\">72.16</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.13.4.6\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S4.T3.8.8.13.4.6.1\" style=\"background-color:#D9ECEC;\">85.48</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.13.4.7\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S4.T3.8.8.13.4.7.1\" style=\"background-color:#D9ECEC;\">82.95</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.13.4.8\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S4.T3.8.8.13.4.8.1\" style=\"background-color:#D9ECEC;\">99.72</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.13.4.9\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S4.T3.8.8.13.4.9.1\" style=\"background-color:#D9ECEC;\">72.72</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.13.4.10\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S4.T3.8.8.13.4.10.1\" style=\"background-color:#D9ECEC;\">28.92k</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.8.8.13.4.11\" style=\"background-color:#D9ECEC;\"><span class=\"ltx_text\" id=\"S4.T3.8.8.13.4.11.1\" style=\"background-color:#D9ECEC;\">5.09</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 54 |
+
"capture": "TABLE III: GPT-driven cache operations produce performance metrics and latency very similar to programmatic implementation of caching, demonstrating GPT\u2019s ability to successfully execute system optimization tasks."
|
| 55 |
+
}
|
| 56 |
+
},
|
| 57 |
+
"image_paths": {},
|
| 58 |
+
"validation": true,
|
| 59 |
+
"references": [],
|
| 60 |
+
"url": "http://arxiv.org/html/2406.06799v2"
|
| 61 |
+
}
|
20240921/2406.11802v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240921/2406.16272v2.json
ADDED
|
@@ -0,0 +1,495 @@
| 1 |
+
{
|
| 2 |
+
"title": "Repairing Catastrophic-Neglect in Text-to-Image Diffusion Models via Attention-Guided Feature Enhancement",
|
| 3 |
+
"abstract": "Text-to-Image Diffusion Models (T2I DMs) have garnered significant attention for their ability to generate high-quality images from textual descriptions.\nHowever, these models often produce images that do not fully align with the input prompts, resulting in semantic inconsistencies.\nThe most prominent issue among these semantic inconsistencies is catastrophic-neglect, where the images generated by T2I DMs miss key objects mentioned in the prompt.\nWe first conduct an empirical study on this issue, exploring the prevalence of catastrophic-neglect, potential mitigation strategies with feature enhancement, and the insights gained.\nGuided by the empirical findings, we propose an automated repair approach named Patcher to address catastrophic-neglect in T2I DMs.\nSpecifically, Patcher first determines whether there are any neglected objects in the prompt, and then applies attention-guided feature enhancement to these neglected objects, resulting in a repaired prompt.\nExperimental results on three versions of Stable Diffusion demonstrate that Patcher effectively repairs the issue of catastrophic-neglect, achieving 10.1%-16.3% higher Correct Rate in image generation compared to baselines.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Text-to-Image Diffusion Models (T2I DMs) Rombach et al. (2022 ###reference_b27###); Saharia et al. (2022 ###reference_b28###); Ramesh et al. (2022a ###reference_b25###) have gained widespread attention in recent years due to their remarkable ability to generate images from textual descriptions (i.e. prompt).\nHowever, it has been demonstrated that the image generated by T2I DMs may not strictly adhere to the description of the input prompt, leading to inconsistencies in the semantics.\nTo this end, many approaches have been proposed to enhance the generation quality through inference process optimization Liu et al. (2022 ###reference_b15###); Feng et al. (2023 ###reference_b10###); Chefer et al. (2023 ###reference_b8###) and hand-crafted prompt writing guidelines Liu and Chilton (2022 ###reference_b17###); Oppenlaender (2022 ###reference_b23###).\nThe former requires modifications to the model structure or parameters, which is difficult for users to perform.\nAlthough the latter is relatively easier to implement, it requires a significant amount of manual effort and suffers poor scalability.\nRecently, Hao et al. (2023 ###reference_b11###) also proposed a method to enhance the quality of generated images by automating the refinement of user-inputted prompts.\n###figure_1### According to previous study Chefer et al. (2023 ###reference_b8###), one of the most prominent issues in semantic consistency is the catastrophic-neglect, i.e., the images generated by T2I DMs often miss some of the key objects mentioned in the textual prompts.\nThis issue is particularly prevalent when a prompt involves multiple objects.\nFigure 1 ###reference_### demonstrates two illustrative cases where one of the two objects is neglected by T2I DMs.\nIn Figure 1 ###reference_### (a), we notice that the object \u201cbicycle\u201d in prompt is described with the explicit feature \u201ctwo-wheeled\u201d while \u201cdonut\u201d is not.\nWe try to craft prompts to repair the issue, and results reveal that by adding a specific explicit feature to the \u201cdonut\u201d (e.g., \u201chollow-centered\u201d), the catastrophic-neglect issue can be resolved.\nFurthermore, as the feature is added, the attention difference between the two mentioned objects (i.e., \u201cbicycle\u201d and \u201cdonut\u201d) is reduced according to the explainable tool Tang et al. 
(2023 ###reference_b30###).\nIt seems that reduction in attention difference can potentially indicate the T2I DMs put more balanced attention towards the two involved objects, resulting both of them can be successfully generated.\nIn Figure 1 ###reference_### (b), we notice that the object \u201cbird\u201d in the prompt is a more general concept with fewer implicit features compared with the concept \u201cgiraffe\u201d, according to the hierarchical structure in WordNet Miller (1995 ###reference_b21###).\nTaken in this sense, we can successfully repair the issue through using more imageable concept (such as \u201ceagle\u201d ) to replace \u201cbird\u201d in the prompt, and the attention difference between two mentioned objects (i.e., \u201ceagle\u201d and \u201cgiraffe\u201d) is also reduced.\nMotivational study in Section 2 ###reference_### provides more details.\nMotivated by the above analysis, we assume the attention difference can guide the mitigation of catastrophic-neglect issue, and this can be achieved through enhancing objects with specific features (i.e., explicit features) or using more imageable concepts (i.e., implicit features) to balance the attention among involved objects in the prompt.\nTherefore, this paper proposes an automatic repair approach named Patcher to address catastrophic-neglect in T2I DMs, guided by the attention difference among objects of input prompt.\nSpecifically, Patcher first parses the original prompt and identifies the objects neglected by the T2I DMs.\nThen, guided by the difference of attention scores, Patcher produces the repaired prompt via enhancing explicit feature (achieved by asking LLMs for suitable modifiers) and implicit features (realized by hyponym substitution using WordNet), and re-determined whether there are still neglected objects in the generated image.\nExperimental results demonstrate that Patcher effectively repairs the issue of catastrophic-neglect in T2I DMs, achieving 10.1%-16.3% higher Correct Rate in image generation compared to baselines, as tested on Stable-Diffusion V1.4, V1.5, and V2.1 models.\nAdditionally, ablation study shows that both explicit and implicit feature enhancing in Patcher contribute to resolving the catastrophic-neglect issue in T2I DMs. We provide the public reproduction package111https://github.com/lsplx/patcher."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Motivation",
|
| 15 |
+
"text": "To better understand catastrophic-neglect and guide the design of the automated repair approach, we conduct the empirical analysis from three aspects, i.e., their prevalence across prompts with different number of objects, potential mitigation strategies based on feature enhancement, and corresponding insights into the effectiveness of feature enhancement."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Issue Prevalence",
|
| 21 |
+
"text": "On the one side, we investigate the error rate of T2I DMs in handling prompts involving different numbers of objects through manual evaluation.\nOn the other side, we explore the proportion of catastrophic-neglect among all errors.\nFirst, we construct three datasets containing single-object, double-object and triple-object prompts respectively.\nFor the single-object prompts, we reuse the 80 object descriptions from different semantic categories in MSCOCO dataset Lin et al. (2014 ###reference_b14###)\nBased on these single-object prompts, we synthesize new prompts containing two or three objects using GPT-3.5 by adding essential conjunctions, adverbs or interactions\n, aiming to generate inputs for T2I DMs that conform to human expressions222The size of the dataset is described in Sections 4.1 ###reference_###..\nWe then input the single-object prompts and multi-object prompts into Stable Diffusion V2.1, a state-of-the-art T2I DM, and manually evaluate the proportion of incorrectly generated images that are not consistent with the prompt (i.e. Error Rate).\nThe evaluation results show that the Error Rate significantly increases (2.5%->50.4%->86.0%) with the numbers of objects in the prompt.\nFurthermore, for the prompts with single, double and triple object, catastrophic-neglect issue accounts for 100%, 93.4%, and 94.0% of all the incorrectly generated images.\nThe remaining incorrectly images are those where the features of multiple objects are blended into a single object.\nIn general, when faced with multi-object prompts, the T2I DM is prone to generating incorrect images, with catastrophic-neglect being the most severe issue in such scenario."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Issue Mitigation via Feature Enhancement",
|
| 27 |
+
"text": "Section 1 ###reference_### has illustratively demonstrated that the imbalance of explicit/implicit features carried by objects in the prompts may lead to the catastrophic-neglect.\nThis section tries to craft the prompts with the idea of adding explicit or implicit features to those neglected objects to investigate whether the issue could be mitigated statistically.\nSpecifically, we apply feature enhancement to double-object and triple-object datasets (Constructed in Section 2.1 ###reference_### with 4041 prompts).\nFirst, we manually add explicit features to the neglected objects.\nThese features enhance the physical appearance of the original objects without altering the semantic meaning of the original prompts.\nAs shown in Figure 1 ###reference_###, the neglected object \u201cdonut\u201d was enhanced with the feature \u201chollow-centered\u201d.\nSecond, we enhance the prompts using implicit features.\nWe manually replace the description of the neglected object with its hyponym with help of WordNet, which denotes a specific concept compared to the original object Miller (1995 ###reference_b21###).\nAs shown in Figure 1 ###reference_###, we replaced \u201cbird\u201d with \u201ceagle\u201d to obtain the repaired prompt.\nThe evaluation results show that, compared to the Error Rate before feature enhancement, manually constructed explicit and implicit features reduce Stable Diffusion\u2019s Error Rate by 26.9% and 24.6%, respectively."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "Explanation for Feature Enhancement",
|
| 33 |
+
"text": "To explore the reasons behind feature enhancement, we use the attention explainability tool (DAAM) Tang et al. (2023 ###reference_b30###) to investigate whether the attention differences between multiple objects change before and after feature enhancement.\nGiven a specific token from the input prompt, DAAM aggregates the T2I DM\u2019s cross-attention values across layers to obtain its attention score.\nThe attention score of each token represents the token\u2019s importance in the image generation process.\nThe attention difference indicates the disparity in the T2I DM\u2019s attention score to different object tokens.\nWe assume that reducing the attention difference between multiple objects can help the T2I DM more evenly focus on the features of each object and generate them correctly.\nFor double-object prompts, we compute the absolute difference in attention scores between the two objects.\nFor prompts with triple object, we first calculate the pairwise differences in attention scores and then average them.\nWe use the prompts that generates incorrectly images from multi-objects prompts constructed in Section 2.1 ###reference_### and the repaired prompts manually constructed in Section 2.2 ###reference_###.\nThe result is shown in Table 1 ###reference_###. The attention difference between multiple objects significantly decreases for prompts that correctly generate images after enhancing explicit or implicit features. Besides, this reduction in attention difference accounts for 80.9% of the correctly generated images.\nIn contrast, for prompts that still generate incorrect images, the attention difference increases. Moreover, the reduction in attention difference accounts for 29.0% of these incorrect generated images.\nThis indicates that features reducing the attention difference between objects are more effective in repairing the catastrophic-neglect in the T2I DM."
|
| 34 |
+
},
|
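A minimal sketch of the attention-difference measurement described in the Section 2.3 text above, assuming the per-object DAAM attention scores have already been collected into a list of floats; the function name and the example values are illustrative, not taken from the paper:

from itertools import combinations

def attention_difference(scores):
    # Mean absolute pairwise difference between per-object attention scores.
    # For two objects this reduces to |a - b|; for three or more objects the
    # pairwise gaps are averaged, matching the description in Section 2.3.
    pairs = list(combinations(scores, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

# Hypothetical scores for "bicycle" and "donut" before and after adding
# the explicit feature "hollow-centered" to the neglected "donut".
print(attention_difference([0.42, 0.11]))  # large gap: "donut" likely neglected
print(attention_difference([0.38, 0.29]))  # smaller gap after feature enhancement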
| 35 |
+
{
|
| 36 |
+
"section_id": "3",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "Methodology",
|
| 39 |
+
"text": "###figure_2### Figure 2 ###reference_### shows the overview of Patcher.\nPatcher consists of two stages: (1) Neglected Objects Identification would determine whether the T2I DM neglect any objects in the input prompt; (2) Feature Enhancement for Neglected Objects would enhance explicit and implicit features for neglected objects and construct the repaired prompt."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.1",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Neglected Objects Identification",
|
| 45 |
+
"text": "To identify the neglected objects, Patcher first extracts the objects from the input prompt.\nSpecifically, Patcher first parses the textual descriptions into a dependency tree using a transformer-based Vaswani et al. (2017 ###reference_b31###) language model333https://huggingface.co/spacy/en_core_web_trf.\nIt then extracts noun phrases from this tree as the object entities.\nIn the meanwhile, Patcher employs DAAM to obtain the attention scores and produces the token-attention pairs (TAP) for each token in the prompt description, which will be utilized in Section 3.2 ###reference_### to guide the feature enhancement.\nAfter that, Patcher calculates the similarities of each extracted object entities and generated images by Clipscore Radford et al. (2021 ###reference_b24###).\nDue to the presence of corresponding visual features when the object is in the image and their absence when it is not, there is a significant difference in similarity between the two scenarios,\nPatcher sets a threshold based on the empirical study to determine whether an object is neglected in the image.\nIf the similarity between the object and the image is below the threshold, we consider the object to be neglected by the T2I model.\nConversely, we consider the object to be correctly generated by the T2I model.\nIf there are no neglected objects in the prompt, output the current prompts; otherwise,\nPatcher sends the prompt into the following stage for repair."
|
| 46 |
+
},
|
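A rough sketch of the threshold check in the Neglected Objects Identification stage, assuming a Hugging Face CLIP checkpoint as a stand-in for CLIPScore; the checkpoint name, the 0.22 threshold, and the function name are illustrative assumptions, since the paper only states that the threshold is set empirically:

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def neglected_objects(image, objects, threshold=0.22):
    # Flag every object whose image-text cosine similarity falls below the
    # (illustrative) threshold.
    inputs = processor(text=objects, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    image_emb = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    text_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    sims = (text_emb @ image_emb.T).squeeze(-1)  # one cosine similarity per object
    return [obj for obj, s in zip(objects, sims.tolist()) if s < threshold]

# Example usage: neglected_objects(Image.open("generated.png"), ["bicycle", "donut"])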
| 47 |
+
{
|
| 48 |
+
"section_id": "3.2",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "Feature Enhancement for Neglected Objects",
|
| 51 |
+
"text": "After the first stage, Patcher derives a set of neglected objects and a set of correctly identified objects. Recall that it also obtains the attention scores for each token in the first stage.\nTypically, an object contains a single token; if it contains multiple tokens, Patcher calculates the average of the attention scores for these tokens.\nIn this way, we obtain the attention score for each neglected object and correct object.\nWe then calculate the differences in attention scores between neglected objects and correct objects.\nSpecifically, it first calculates the pairwise differences between attention scores of objects from the neglected and correct object sets, and then averages these differences.\nThis provides a comprehensive measure of how uniformly the T2I DM\u2019s attention is distributed between two set of objects.\nThe calculation process is shown in Equation 1 ###reference_###, where denotes the attention score corresponding to the i-th object from the neglected object set, and denotes the attention score corresponding to the j-th object from the correct object set.\nNext, Patcher employs two repair strategies: 1) Explicit Feature Enhancing, which is used to obtain the physical features of the neglected objects; 2) Implicit Feature Enhancing, which is used to obtain hyponyms of the neglected objects guided by the attention difference.\nWith the two strategies, Patcher simultaneously generates explicit and implicit features, each forming a repaired prompt, which together constitute two\nrepaired prompts to determine whether there are neglected objects in them.\nFollowing introduces the prompt repair process with the two strategies respectively."
|
| 52 |
+
},
|
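Equation 1 itself is not reproduced in the extracted text; a plausible LaTeX rendering of the averaged pairwise attention difference it describes, with the notation (D, O_neg, O_cor, A) chosen here rather than taken from the paper, is:

D = \frac{1}{\lvert O_{\mathrm{neg}} \rvert \, \lvert O_{\mathrm{cor}} \rvert}
    \sum_{i=1}^{\lvert O_{\mathrm{neg}} \rvert} \sum_{j=1}^{\lvert O_{\mathrm{cor}} \rvert}
    \left\lvert A^{\mathrm{neg}}_{i} - A^{\mathrm{cor}}_{j} \right\rvert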
| 53 |
+
{
|
| 54 |
+
"section_id": "3.2.1",
|
| 55 |
+
"parent_section_id": "3.2",
|
| 56 |
+
"section_name": "3.2.1 Explicit Feature Enhancement",
|
| 57 |
+
"text": "From the explicit perspective, objects\u2019 features are enhanced from two aspects, i.e., shape and color, leveraging the LLM\u2019s powerful understanding of the general knowledge Chang et al. (2024 ###reference_b6###) with the carefully designed prompt (See Appendix A ###reference_### for specific details).\nThe prompt consists of three parts: 1) the specific question, which directly asks the LLM about the core objective,\n2) the output guidelines, which constrain the format of the model\u2019s output and guide it to produce diverse responses, and 3) the example, which helps the LLM understand the question and produce the response expected by the users.\nAs shown in Figure 2 ###reference_###, Patcher inputs the explicit feature prompts into the LLM444The LLM is GPT-3.5, which in return provides a variety set of explicit features.\nFor each explicit feature, Patcher replaces the description of the neglected object in the original prompt with an enhanced description containing the object and its explicit feature, generating a candidate prompt.\nPatcher iteratively queries the T2I models with the candidate prompts until no neglected objects (determined with the strategy in Section 3.1 ###reference_###) or reaching the maximum iteration number (set as 4 in our study).\nIf all color and shape explicit features fail to make the neglected object visible in the image, Patcher selects the feature with the smallest attention difference from both candidate sets of color and shape, then combines them to generate the final repaired prompt."
|
| 58 |
+
},
|
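A sketch of the explicit-feature repair loop described in Section 3.2.1, under the assumption that the LLM query, the T2I model, the Section 3.1 neglect check, and the attention-difference computation are available as callables; all four names are placeholders, not the paper's API:

def repair_with_explicit_features(prompt, neglected, aspect, ask_llm_for_features,
                                  generate_image, find_neglected, attention_gap,
                                  max_iters=4):
    # Try LLM-suggested modifiers for one aspect ("shape" or "color") of the
    # neglected object; stop as soon as the object appears in the image, or
    # fall back to the candidate with the smallest attention difference.
    best = None
    for feature in ask_llm_for_features(neglected, aspect)[:max_iters]:
        candidate = prompt.replace(neglected, f"{feature} {neglected}")
        image = generate_image(candidate)
        if not find_neglected(image, candidate):
            return candidate, image                  # repaired successfully
        gap = attention_gap(candidate)
        if best is None or gap < best[0]:            # keep the smallest-gap candidate
            best = (gap, candidate, image)
    return best[1], best[2]                          # best-effort fallback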
| 59 |
+
{
|
| 60 |
+
"section_id": "3.2.2",
|
| 61 |
+
"parent_section_id": "3.2",
|
| 62 |
+
"section_name": "3.2.2 Implicit Feature Enhancement",
|
| 63 |
+
"text": "To obtain the hyponyms of a neglected object, Patcher uses Natural Language Processing tool Bird et al. (2009 ###reference_b3###) to search all hyponyms of the object, i.e., including the direct hyponyms and those indirect hyponyms, recursively, until no further hyponyms are found.\nAs shown in the hyponym tree in Figure 2 ###reference_###, Patcher constructs a hyponym tree for \u201cbicycle\u201d, where the child node \u201cmountain bike\u201d is a direct hyponyms of \u201cbicycle\u201d.\nFor nodes at the same hierarchical level, such as \u201cmountain bike\u201d and \u201croad bike\u201d,\ntheir conceptual levels are similar, making them sibling nodes.\nBesides, the child nodes of \u201cmountain bike\u201d are indirect hyponyms of \u201cbicycle\u201d.\nAmong these, some indirect hyponyms such as \u201cSuspension Fork\u201d have already deviated from the original semantic concept of the root node \u201cbicycle\u201d, which could not help the T2I DM generate correct original object.\nTo mitigate this issue, Patcher performs semantic-based pruning for the hyponym tree.\nSpecifically, by traversing each child node of the hyponym tree using breadth-first search, Patcher maps the textual representation of the current node object and neglected object into a vector space using a language model Brown et al. (2020 ###reference_b5###), then computes the cosine similarity between them.\nIf the similarity is below a certain threshold, Patcher prunes the current node and its children.\nAfter that, Patcher performs an attention-guided search on the pruned hyponym tree, as detailed in Algorithm 1 ###reference_###.\nFor each node, Patcher first replaces the neglected object in the original prompt with the hyponym represented by that node (Line 2-4).\nThen, input the generated repaired prompt into the Neglected Objects Identification Stage to judge whether the neglected object still exists.\nIf there are no neglected objects, output the repaired prompt (Line 5-8); otherwise, proceed with the attention-guided search (Line 9-15).\nSpecifically, Patcher calculates the attention difference between the replaced hyponym and the correct objects in the repaired prompt, then compares it with the original attention difference between neglected object and correct objects.\nConsidering that child nodes contains more implicit features compared to sibling nodes, if the attention difference is reduced, Patcher continues the search with the child nodes of the current node; otherwise, search its sibling nodes.\nDuring the process of explicit and implicit feature enhancement, if a correct image is generated, the corresponding repaired prompt is returned.\nOtherwise, the prompt that achieves the minimum attention difference is returned."
|
| 64 |
+
},
|
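A simplified sketch of the hyponym-tree construction with semantic pruning and the attention-guided search of Section 3.2.2, using NLTK's WordNet interface (the toolkit cited in the section); text_similarity, try_prompt, attention_gap, the 0.5 pruning threshold, and the tree representation are illustrative assumptions rather than the paper's Algorithm 1 verbatim:

from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet") once

def pruned_hyponym_tree(word, text_similarity, threshold=0.5):
    # Breadth-first expansion of WordNet hyponyms; nodes whose embedding
    # similarity to the original word drops below the threshold are pruned
    # together with their subtrees.
    root = wn.synsets(word, pos=wn.NOUN)[0]
    tree, queue = {root: []}, [root]
    while queue:
        node = queue.pop(0)
        for child in node.hyponyms():
            name = child.lemma_names()[0].replace("_", " ")
            if text_similarity(word, name) < threshold:
                continue                              # prune node and its children
            tree[node].append(child)
            tree[child] = []
            queue.append(child)
    return root, tree

def attention_guided_search(prompt, neglected, root, tree, try_prompt, attention_gap,
                            base_gap):
    # Replace the neglected object with candidate hyponyms; descend into a node's
    # children when the attention difference shrinks, otherwise move on to siblings.
    frontier = list(tree[root])
    while frontier:
        node = frontier.pop(0)
        hyponym = node.lemma_names()[0].replace("_", " ")
        candidate = prompt.replace(neglected, hyponym)
        if try_prompt(candidate):                     # no neglected objects remain
            return candidate
        if attention_gap(candidate) < base_gap:       # child nodes add implicit features
            frontier = list(tree[node]) + frontier
    return None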
| 65 |
+
{
|
| 66 |
+
"section_id": "4",
|
| 67 |
+
"parent_section_id": null,
|
| 68 |
+
"section_name": "Experimental Setup",
|
| 69 |
+
"text": ""
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.1",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "Datasets",
|
| 75 |
+
"text": "For experimental evaluation, we first introduce the popularly used datasets constructed by HILA et al. Chefer et al. (2023 ###reference_b8###) for T2I task.\nGiven that publicly available datasets only involve prompts with double objects combined by an \u201cand\u201d relationship, we further based on some of the 80 single objects in MSCOCO with the help of LLM (same as the datasets in Section 2.1 ###reference_###).\nFollowings introduces the details of the datasets.\nTemplate-Based Pairs (TBP): It is the public dataset constructed by HILA et al. Chefer et al. (2023 ###reference_b8###) used for T2I task.\nAll the prompts in the dataset contain two objects that are constructed by three templates, i.e.,\n\u201ca [animalA] and a [animalB]\u201d, \u201ca [animal] and a [color][object]\u201d, and \u201ca [colorA][objectA] and a [colorB][objectB]\u201d.\nThe placeholders in the templates are filled with 12 types of animals, 12 objects and 11 colors.\nTwo/Three-Object Prompts (TwOP/ThreeOP): The detailed construction of our two datasets can be found in Section 2.1 ###reference_###.\nThe constructed datasets contain 3,160 prompts with two objects and the same number of prompts with three objects."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.2",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "Subject Models",
|
| 81 |
+
"text": "To investigate the performance of Patcher in repairing catastrophic-neglect issue.\nwe introduce three T2I DMs (Stable Diffusion V1.4 (SD V1.4), Stable Diffusion V1.5 (SD V1.5), and Stable Diffusion V2.1 (SD V2.1)) for their wide adoption in community. All models are run on a 3090 GPU with 24GB of VRAM."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.3",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "Evaluation Metric and Measurement Method",
|
| 87 |
+
"text": "We adopt two evaluation metrics.\nCLIPScore: it measures the similarity between the input prompt and generated image, and is used in many previous studies Hao et al. (2023 ###reference_b11###); Chefer et al. (2023 ###reference_b8###).\nHowever, it serves as a weaker indication of image-text similarity in T2I task, as correctness of generated images cannot be absolutely determined directly based on the magnitude of the value.\nCorrect Rate (CR): the percentage of correctly generated images out of all generated images.\nCompared to CLIPScore, CR is a direct measurement indicating whether a generated image is correct.\nFor an image generated by T2I models, we manually judge whether it is correct by a annotation team consisting of one senior researcher and two Ph.D students.\nIf more than half of the members perceive the generated image to be semantically consistent with the input prompt, we consider it as a correctly generated one."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.4",
|
| 91 |
+
"parent_section_id": "4",
|
| 92 |
+
"section_name": "Baselines",
|
| 93 |
+
"text": "Our baselines include approaches based on prompt optimization (Promptist) and inference process optimization (AE).\nBesides above two baselines specific for T2I DMs, we have also specifically established a baseline that iteratively refines the output results through iterative queries (LR), which is a commonly-used strategy for performance improvement in the LLM context Chao et al. (2023 ###reference_b7###); Mehrotra et al. (2023 ###reference_b20###).\nPromptist Hao et al. (2023 ###reference_b11###)\nis the state-of-the-art approach to improve the generation quality of T2I DMs via prompt optimization.\nIt first performs supervised fine-tuning with a pretrained language model on a small collection of manually engineered prompts. Then it defines a reward function that encourages the T2I DM to generate more aesthetically pleasing images while preserving the original prompt intentions. After that, it uses reinforcement learning with the reward function to further boosts performance of the fine-tuned model.\nAttend-and-Excite (AE) Chefer et al. (2023 ###reference_b8###)\nis the state-of-the-art approach specific for catastrophic-neglect in T2I DMs via inference process optimization.\nSpecifically, it adds an attention guidance mechanism during the model\u2019s inference stage to enhance the cross-attention units.\nThis mechanism ensures that the model attends to all object tokens in the text prompts and boosts their activations, thereby encouraging the model to generate all objects described in the text prompts.\nHowever, AE requires prior knowledge of the positions of object tokens in the original prompts.\nFor the input prompts, we use the object extraction method in Patcher to identify and return the positions of the objects within the prompts.\nFinally, the prompts, along with the positional information of the objects, are fed into the T2I DM enhanced by AE to generate optimized images.\nLLM-Repair (LR)\nimproves the quality of the generated images by the iterative query strategy that is commonly employed in practice to improve the outputs in the LLM context.\nSpecifically, with the original prompt, LR first identifies the neglected objects in the generated employing the first stage in Patcher.\nAfter that, LR leverages GPT-3.5 to produce the new prompt describing the details of the neglected objects and asking for the T2I model to mitigate the catastrophic-neglect as much as possible in the next iteration (the prompt templates in shown in Appendix B ###reference_###).\nThen, LR iteratively query the T2I models until no object is identified as neglected one or reaching the maximum iteration number (set as 8 in our study)."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5",
|
| 97 |
+
"parent_section_id": null,
|
| 98 |
+
"section_name": "Results",
|
| 99 |
+
"text": "We designed two sets of experiments to explore the performance of Patcher in repairing catastrophic-neglect: the effectiveness of Patcher and the ablation study within Patcher."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "5.1",
|
| 103 |
+
"parent_section_id": "5",
|
| 104 |
+
"section_name": "Effectiveness of Patcher",
|
| 105 |
+
"text": "Table 2 ###reference_### shows the effectiveness of Patcher and baselines in CR and CLIPScore.\nThe column \u201cOriginal\u201d represents the quality of the images generated by different T2I DMs with the original prompts in the three datasets.\nThe last four columns show the quality of the generated images after repair for three baselines and Patcher respectively.\nFrom the perspective of CR, Patcher achieves the best performance across all T2I models under testing and datasets, surpassing the baselines of 10.1%-16.3%.\nEspecially on the last two datasets, TwOP and ThreeOP with more complex inter-object relationships or a greater number of objects, Patcher shows a more substantial improvement (31.8% higher than the original prompts and 12.4%-21.9% higher than the three baselines).\nCompared to Promptist, Patcher achieves an CR improvement of 16.3%.\nPromptist automates the addition of modifiers at the end of the input prompts, such as \u201chighly detailed\u201d, \u201cmasterpiece\u201d, or \u201csharp focus\u201d, to enhance the quality of the generated images.\nAdding such modifiers could help the T2I DM focus more on depicting the overall semantics of the prompt.\nHowever, in cases where there are significant feature differences between multiple objects, enhancing the T2I DM\u2019s focus on the entire sentence of the prompt could not effectively narrow the attention difference between different objects.\nIt still requires the addition of appropriate modifiers to objects with weaker features.\nAs for AE, it optimizes the inference process within T2I DMs rather than the prompts, which is supposed to be effective in principle but more difficult for end users to perform.\nHowever, Patcher still achieves superior performance compared to AE, with a CR improvement of 10.1%.\nAs for LR, similar to Patcher, multiple attempts are needed to repair the prompts.\nStatistical analysis shows that LR requires an average of 5.7 attempts to correctly repair an image, whereas Patcher requires only 2.3 attempts.\nAdditionally, Patcher\u2019s CR exceeds LR by 14.2%, demonstrating the effectiveness of feature enhancement.\nThe result also implies that if lacking guidance on feature enhancement, relying solely on the intrinsic capabilities of T2I MDs makes it difficult to effectively improve the accuracy of generated images.\nFor the CLIPScore, the results shows that the improvements are subtle.\nThe reason is that for prompts containing multiple objects, the presence of some objects from the prompt in the generated image can still result in a high CLIPScore.\nTherefore, the similarity difference between correctly generated images and incorrectly generated images with respect to the original prompts is subtle.\nFurthermore, as we illustrated in Section 4.3 ###reference_###, CLIPScore is a weak indicator with which we can not directly infer whether a generated image is correct or not.\nBy comparison, CR together with the manual judgement is more suitable and direct to evaluate whether the catastrophic-neglect issue is mitigated or not.\nIn general, the results demonstrate that Patcher significantly improves CR while maintaining CLIPScore compared to original dataset, demonstrating it\u2019s effectiveness."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "5.2",
|
| 109 |
+
"parent_section_id": "5",
|
| 110 |
+
"section_name": "Ablation Study",
|
| 111 |
+
"text": "To investigate the effectiveness of the core component in Patcher, we conducted ablation experiments to explore the Correct Rate (CR) after removing Explicit Feature Enhancement (EFE) and Implicit Feature Enhancement (IFE) individually.\nThe results, as shown in Table 3 ###reference_###, show that both EFE and IFE significantly improve CR.\nSpecifically, for datasets containing prompts with two objects, EFE and IFE achieve CRs of 78.8% and 70.7%, respectively, which are 23.9% and 15.8% higher than the CR of the original dataset.\nFor datasets containing prompts with three objects, EFE and IFE achieve CR improvements of 25.4% and 13.8%, respectively, compared to the original dataset.\nThis demonstrates the effectiveness of each component of Patcher.\nMoreover, combining EFE and IFE achieves a higher CR, indicating that the two components complement each other and that their combination can address a broader scope of catastrophic-neglect issue."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "6",
|
| 115 |
+
"parent_section_id": null,
|
| 116 |
+
"section_name": "Related Work",
|
| 117 |
+
"text": ""
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "6.1",
|
| 121 |
+
"parent_section_id": "6",
|
| 122 |
+
"section_name": "Text-to-Image Diffusion Models",
|
| 123 |
+
"text": "In recent years, the diffusion model has emerged as a more advanced and popular framework for text-to-image (T2I) generation compared to traditional non-diffusion methods like Variational Autoencoders (VAEs) Yan et al. (2016 ###reference_b32###); Mansimov et al. (2016 ###reference_b19###) and Generative Adversarial Networks (GANs) Zhu et al. (2019 ###reference_b37###); Ye et al. (2021 ###reference_b33###).\nCompared to GANs and VAEs, diffusion models achieve better results due to their stability during training and ability to progressively refine images, leading to higher quality and more detailed outputs Ho et al. (2020 ###reference_b12###); Nichol and Dhariwal (2021 ###reference_b22###) .\nTo control the generation of diffusion models,\nDhariwal and Nichol (2021 ###reference_b9###) firstly propose a conditional image synthesis method utilizing classifier guidance, achieving great success in text-to-image generation.\nFollowing that, some representative studies Bao et al. (2022 ###reference_b2###); Ramesh et al. (2022b ###reference_b26###); Rombach et al. (2022 ###reference_b27###); Saharia et al. (2022 ###reference_b28###) of text-to-image diffusion models have\nbeen proposed, based on the conditioning mechanism.\nOur experiments are based on Stable Diffusion Rombach et al. (2022 ###reference_b27###) considering its wide applications."
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "6.2",
|
| 127 |
+
"parent_section_id": "6",
|
| 128 |
+
"section_name": "Different Issue Types in T2I DM",
|
| 129 |
+
"text": "With the rapid development of T2I DMs, researchers have primarily focused on two main aspects: safety issue and fundamental performance issue Zhai et al. (2023 ###reference_b35###, 2024a ###reference_b34###, 2024b ###reference_b36###); Liu et al. (2024 ###reference_b18###); Borji (2023 ###reference_b4###).\nAs for the issue in fundamental performance,\nBorji (2023 ###reference_b4###) systematically discusses all existing issues in image generation but does not analyze the causes of the catastrophic-neglect issue in T2I models when prompts contain multiple object descriptions. According to the motivation, we discovered that catastrophic-neglect is the most prevalent issue (accounts for 94.0% in Error Rate) when prompts include multiple object descriptions.\nLiu et al. (2023 ###reference_b16###) mentions the issue of object omission, assuming that specific action descriptions cause some objects to be missing in the image. Our proposal addresses object omission caused by inconsistent features among multiple objects, highlighting a different insight.\nSamuel et al. (2024 ###reference_b29###) addresses the issue of text-to-image models generating incorrect objects for rare concepts. It focuses more on single objects, which is not consistent with the issue our approach aims to solve.\nAithal et al. (2024 ###reference_b1###) discusses the hallucination issue, where text-to-image generated images contain samples that have never existed in the training set. This type of hallucination issue is not related to the catastrophic-neglect issue we are addressing."
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "6.3",
|
| 133 |
+
"parent_section_id": "6",
|
| 134 |
+
"section_name": "Optimizations For T2I DM",
|
| 135 |
+
"text": "Some research efforts have focused on optimizing the inference process of T2I DMs. For instance, Liu et al. (2022 ###reference_b15###); Feng et al. (2023 ###reference_b10###); Chefer et al. (2023 ###reference_b8###) have worked on improving the guidance mechanism through cross-attention, enabling T2I DMs to better focus on each object and attribute within the prompts, which helps in generating more accurate images.\nAdditionally, there are works focusing on hand-crafted guidelines for prompt optimization. These studies involve selecting and composing prompts to generate images that achieve a distinct visual style and high quality Liu and Chilton (2022 ###reference_b17###); Oppenlaender (2022 ###reference_b23###).\nSuch approaches often rely on manual intervention and expert knowledge.\nTo automate the construction of optimized prompts, Hao et al. (2023 ###reference_b11###) propose an approach that combines supervised learning and reinforcement learning to train a prompt optimization model. The optimized prompts generated by this model are able to produce more aesthetically pleasing images and better adhere to the semantic content of the prompts.\nJust as large language models exhibit biases in their understanding of different words Li et al. (2024 ###reference_b13###), T2I DMs face similar issues. This leads to the issue of unbalanced object characteristics when describing multiple objects.\nIn this study, we focus on repairing catastrophic-neglect in T2I DMs by optimizing at the prompt level."
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"section_id": "7",
|
| 139 |
+
"parent_section_id": null,
|
| 140 |
+
"section_name": "Conclusion",
|
| 141 |
+
"text": "This paper proposes an approach (Patcher) to repair catastrophic-neglect in Text-to-Image Diffusion Models by attention-guided features enhancement of neglected objects in the generated images.\nPatcher first inputs the prompt into a T2I DM and an attention explainability tool to obtain the generated image and the attention scores for each token.\nIt then checks whether all objects in the prompt appear in the generated image based on the text-image similarity.\nIf any objects are neglected, Patcher iteratively searches for suitable explicit and implicit features to enhance the neglected objects based on the attention differences between the objects.\nExperimental results demonstrate that Patcher effectively addresses the issue of catastrophic-neglect in T2I DMs, achieving a 10.1%-16.3% higher Correct Rate based on manual annotation compared to baselines, as tested on Stable-Diffusion V1.4, V1.5, and V2.1 models.\nAdditionally, ablation experiments show that both explicit feature enhancing and implicit feature enhancing in Patcher contribute to resolving the issue of catastrophic-neglect in T2I DMs."
|
| 142 |
+
}
|
| 143 |
+
],
|
| 144 |
+
"appendix": [
|
| 145 |
+
{
|
| 146 |
+
"section_id": "Appendix 1",
|
| 147 |
+
"parent_section_id": null,
|
| 148 |
+
"section_name": "Appendix A Details of Explicit Feature Prompts",
|
| 149 |
+
"text": "The details of the explicit feature prompts are illustrated below.\nIn Patcher, we replace the placeholders in the following prompts with the neglected objects."
|
| 150 |
+
},
|
| 151 |
+
{
|
| 152 |
+
"section_id": "Appendix 2",
|
| 153 |
+
"parent_section_id": null,
|
| 154 |
+
"section_name": "Appendix B Details of The Prompt in LLM-Repair",
|
| 155 |
+
"text": "The details of the prompt in LLM-Repair is illustrated below. In Patcher, we replace the placeholders in the following prompts with the input prompt and the neglected object."
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"section_id": "Appendix 3",
|
| 159 |
+
"parent_section_id": null,
|
| 160 |
+
"section_name": "Appendix C Examples of Images Generated by Original Prompts and Repaired Prompts Derived from Patcher",
|
| 161 |
+
"text": "Examples of the images generated from the original prompt and the repaired prompt generated by Patcher are shown in the figure 3 ###reference_###.\n###figure_3###"
|
| 162 |
+
}
|
| 163 |
+
],
|
| 164 |
+
"tables": {
|
| 165 |
+
"1": {
|
| 166 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>The attention difference between multiple objects before and after using explicit and implicit features.\n\u2018Correct\u2019 and \u2018Wrong\u2019 respectively indicates the results of the newly generated images after adding the features.\n</figcaption>\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S2.T1.1\" style=\"width:208.1pt;height:56pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-29.6pt,8.0pt) scale(0.778319060748261,0.778319060748261) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.1.1\">\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S2.T1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.1.1\">Strategy</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"3\" id=\"S2.T1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.2.1\">Correct</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S2.T1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.3.1\">Wrong</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.2.1\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S2.T1.1.1.2.1.1\">Before</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.2.2\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S2.T1.1.1.2.2.1\">After</span></td>\n<td class=\"ltx_td\" id=\"S2.T1.1.1.2.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.2.4\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S2.T1.1.1.2.4.1\">Before</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.2.5\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S2.T1.1.1.2.5.1\">After</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.1.3.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S2.T1.1.1.3.1.1\">Explicit Feature</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.3.2\">658</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.3.3\">232</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.1.1.3.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.3.5\">808</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.3.6\">887</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S2.T1.1.1.4.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S2.T1.1.1.4.1.1\">Implicit Feature</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.1.1.4.2\">934</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.1.1.4.3\">437</td>\n<td class=\"ltx_td ltx_border_bb ltx_border_t\" id=\"S2.T1.1.1.4.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.1.1.4.5\">713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S2.T1.1.1.4.6\">1442</td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 167 |
+
"capture": "Table 1: The attention difference between multiple objects before and after using explicit and implicit features.\n\u2018Correct\u2019 and \u2018Wrong\u2019 respectively indicates the results of the newly generated images after adding the features.\n"
|
| 168 |
+
},
|
| 169 |
+
"2": {
|
| 170 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>The Correct Rate (CR) and the ClIPScore of the original prompts, <span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.2.1\">Patcher</span> and baselines.</figcaption>\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.T2.3\" style=\"width:216.8pt;height:178.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-99.4pt,81.8pt) scale(0.521629940568697,0.521629940568697) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T2.3.1\">\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.3.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.1.1.1\">Dataset</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.3.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.1.2.1\">Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T2.3.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.1.3.1\">Metric</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.3.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.1.4.1\">Original</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.3.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.1.5.1\">LR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.3.1.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.1.6.1\">Promptist</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.3.1.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.1.7.1\">AE</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.3.1.1.8\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S5.T2.3.1.1.8.1\">Patcher</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.2.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.1.2.1.1\">TBP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.2.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.1.2.2.1\">SD V1.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.1.2.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.2.3.1\">CR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.2.4\">61.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.2.5\">75.0%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.2.6\">78.9%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.2.7\">83.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.2.8.1\">89.8%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.3.1.3.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.3.1.1\">CLIPScore</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.3.2\">32.0%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.3.3\">32.2%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.3.4\">32.2%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.3.5\">32.6%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.3.6\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S5.T2.3.1.3.6.1\">32.7%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.4.1\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:11.2pt;background:black;display:inline-block;\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.4.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.1.4.2.1\">SD V1.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.1.4.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.4.3.1\">CR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.4.4\">55.1%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.4.5\">76.1%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.4.6\">78.2%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.4.7\">79.3%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.4.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.4.8.1\">88.0%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.5\">\n<td class=\"ltx_td\" id=\"S5.T2.3.1.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.3.1.5.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.5.2.1\">CLIPScore</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.5.3\">31.8%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.5.4\">32.0%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.5.5\">32.1%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.5.6.1\">32.3%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.5.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.5.7.1\">32.3%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.6.1\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:11.2pt;background:black;display:inline-block;\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.6.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.1.6.2.1\">SD V2.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.1.6.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.6.3.1\">CR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.6.4\">72.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.6.5\">84.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.6.6\">81.1%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.6.7\">85.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.6.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.6.8.1\">96.0%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.7\">\n<td class=\"ltx_td\" id=\"S5.T2.3.1.7.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.3.1.7.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.7.2.1\">CLIPScore</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.7.3\">32.8%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.7.4\">33.0%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.7.5\">32.7%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.7.6\">33.2%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.7.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.7.7.1\">33.4%</span></td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S5.T2.3.1.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.8.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.1.8.1.1\">TwOP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.8.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.1.8.2.1\">SD V1.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.1.8.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.8.3.1\">CR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.8.4\">45.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.8.5\">63.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.8.6\">53.8%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.8.7\">63.2%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.8.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.8.8.1\">77.8%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.3.1.9.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.9.1.1\">CLIPScore</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.9.2\">30.3%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.9.3\">30.5%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.9.4\">29.5%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.9.5\">30.6%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.9.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.9.6.1\">30.7%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.10.1\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:11.2pt;background:black;display:inline-block;\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.10.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.1.10.2.1\">SD V1.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.1.10.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.10.3.1\">CR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.10.4\">45.8%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.10.5\">67.9%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.10.6\">56.2%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.10.7\">68.2%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.10.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.10.8.1\">78.0%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.11\">\n<td class=\"ltx_td\" id=\"S5.T2.3.1.11.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.3.1.11.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.11.2.1\">CLIPScore</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.11.3\">30.1%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.11.4\">30.6%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.11.5\">29.4%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.11.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.11.6.1\">30.7%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.11.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.11.7.1\">30.7%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.12.1\"><span 
class=\"ltx_rule\" style=\"width:0.0pt;height:11.2pt;background:black;display:inline-block;\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.12.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.1.12.2.1\">SD V2.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.1.12.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.12.3.1\">CR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.12.4\">49.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.12.5\">69.1%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.12.6\">63.8%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.12.7\">69.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.12.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.12.8.1\">80.2%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.13\">\n<td class=\"ltx_td\" id=\"S5.T2.3.1.13.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.3.1.13.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.13.2.1\">CLIPScore</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.13.3\">30.5%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.13.4\">30.7%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.13.5\">30.1%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.13.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.13.6.1\">30.9%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.13.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.13.7.1\">30.9%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.14.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.1.14.1.1\">ThreeOP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.14.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.1.14.2.1\">SD V1.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.1.14.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.14.3.1\">CR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.14.4\">12.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.14.5\">28.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.14.6\">32.2%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.14.7\">29.0%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.14.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.14.8.1\">41.0%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.3.1.15.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.15.1.1\">CLIPScore</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.15.2\">31.2%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.15.3\">31.3%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.15.4\">29.7%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.15.5\">31.6%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.15.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.15.6.1\">31.7%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.16.1\"><span class=\"ltx_rule\" 
style=\"width:0.0pt;height:11.2pt;background:black;display:inline-block;\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.16.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.1.16.2.1\">SD V1.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.1.16.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.16.3.1\">CR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.16.4\">13.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.16.5\">28.9%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.16.6\">32.2%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.16.7\">32.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.16.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.16.8.1\">46.4%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.17\">\n<td class=\"ltx_td\" id=\"S5.T2.3.1.17.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.3.1.17.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.17.2.1\">CLIPScore</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.17.3\">31.2%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.17.4\">31.3%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.17.5\">30.2%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.17.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.17.6.1\">31.6%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.17.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.17.7.1\">31.6%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.18\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.1.18.1\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:11.2pt;background:black;display:inline-block;\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T2.3.1.18.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.1.18.2.1\">SD V2.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.1.18.3\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.18.3.1\">CR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.18.4\">14.0%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.18.5\">30.1%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.18.6\">33.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.18.7\">34.3%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.1.18.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.1.18.8.1\">48.2%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.19\">\n<td class=\"ltx_td ltx_border_bb\" id=\"S5.T2.3.1.19.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T2.3.1.19.2\"><span class=\"ltx_text ltx_font_italic\" id=\"S5.T2.3.1.19.2.1\">CLIPScore</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.3.1.19.3\">31.3%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.3.1.19.4\">31.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.3.1.19.5\">30.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.3.1.19.6\">31.7%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.3.1.19.7\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S5.T2.3.1.19.7.1\">31.8%</span></td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 171 |
+
"capture": "Table 2: The Correct Rate (CR) and the ClIPScore of the original prompts, Patcher and baselines."
|
| 172 |
+
},
|
| 173 |
+
"3": {
|
| 174 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>The Correct Rate of the original prompts, Explicit Feature Enhancing (EFE), Implicit Feature Enhancing (IFE) and <span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.2.1\">Patcher</span>.</figcaption>\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.T3.3\" style=\"width:216.8pt;height:131.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-40.1pt,24.3pt) scale(0.730135559180205,0.730135559180205) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T3.3.1\">\n<tr class=\"ltx_tr\" id=\"S5.T3.3.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.3.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.1.1.1\">Dataset</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.3.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.1.2.1\">Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.3.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.1.3.1\">Original</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.3.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.1.4.1\">EFE</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.3.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.1.5.1\">IFE</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.3.1.1.6\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S5.T3.3.1.1.6.1\">Patcher</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.3.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.2.1\"><span class=\"ltx_text\" id=\"S5.T3.3.1.2.1.1\">TBP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.3.1.2.2\"><span class=\"ltx_text\" id=\"S5.T3.3.1.2.2.1\">SD V1.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.2.3\">61.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.2.4\">82.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.2.5\">79.3%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.2.6.1\">89.8%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.3.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.3.1.3.1\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:11.2pt;background:black;display:inline-block;\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.3.1.3.2\"><span class=\"ltx_text\" id=\"S5.T3.3.1.3.2.1\">SD V1.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.3.3\">55.1%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.3.4\">81.0%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.3.5\">74.2%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.3.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.3.6.1\">88.0%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.3.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.3.1.4.1\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:11.2pt;background:black;display:inline-block;\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.3.1.4.2\"><span class=\"ltx_text\" 
id=\"S5.T3.3.1.4.2.1\">SD V2.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.4.3\">72.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.4.4\">90.1%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.4.5\">83.7%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.4.6.1\">96.0%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.3.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.5.1\"><span class=\"ltx_text\" id=\"S5.T3.3.1.5.1.1\">TwOP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.3.1.5.2\"><span class=\"ltx_text\" id=\"S5.T3.3.1.5.2.1\">SD V1.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.5.3\">45.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.5.4\">73.2%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.5.5\">59.0%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.5.6.1\">77.8%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.3.1.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.3.1.6.1\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:11.2pt;background:black;display:inline-block;\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.3.1.6.2\"><span class=\"ltx_text\" id=\"S5.T3.3.1.6.2.1\">SD V1.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.6.3\">45.8%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.6.4\">70.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.6.5\">61.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.6.6.1\">78.0%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.3.1.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.3.1.7.1\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:11.2pt;background:black;display:inline-block;\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.3.1.7.2\"><span class=\"ltx_text\" id=\"S5.T3.3.1.7.2.1\">SD V2.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.7.3\">49.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.7.4\">75.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.7.5\">66.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.7.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.7.6.1\">80.2%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.3.1.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.8.1\"><span class=\"ltx_text\" id=\"S5.T3.3.1.8.1.1\">ThreeOP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.3.1.8.2\"><span class=\"ltx_text\" id=\"S5.T3.3.1.8.2.1\">SD V1.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.8.3\">12.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.8.4\">34.0%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.8.5\">24.6%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.8.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.8.6.1\">41.0%</span></td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S5.T3.3.1.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.3.1.9.1\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:11.2pt;background:black;display:inline-block;\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.3.1.9.2\"><span class=\"ltx_text\" id=\"S5.T3.3.1.9.2.1\">SD V1.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.9.3\">13.4%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.9.4\">40.2%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.9.5\">27.2%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.1.9.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.9.6.1\">46.4%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.3.1.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.3.1.10.1\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:11.2pt;background:black;display:inline-block;\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.T3.3.1.10.2\"><span class=\"ltx_text\" id=\"S5.T3.3.1.10.2.1\">SD V2.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T3.3.1.10.3\">14.0%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T3.3.1.10.4\">41.8%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T3.3.1.10.5\">29.5%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T3.3.1.10.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.3.1.10.6.1\">48.2%</span></td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 175 |
+
"capture": "Table 3: The Correct Rate of the original prompts, Explicit Feature Enhancing (EFE), Implicit Feature Enhancing (IFE) and Patcher."
|
| 176 |
+
}
|
| 177 |
+
},
|
| 178 |
+
"image_paths": {
|
| 179 |
+
"1": {
|
| 180 |
+
"figure_path": "2406.16272v2_figure_1.png",
|
| 181 |
+
"caption": "Figure 1: \nExamples of catastrophic neglect in the generated images by T2I DMs, and the enhancement of explicit and implicit features.",
|
| 182 |
+
"url": "http://arxiv.org/html/2406.16272v2/extracted/5869587/fig/Enhance_feature.png"
|
| 183 |
+
},
|
| 184 |
+
"2": {
|
| 185 |
+
"figure_path": "2406.16272v2_figure_2.png",
|
| 186 |
+
"caption": "Figure 2: \nThe overview of Patcher. The procedure in the dashed box is executed only the first time.",
|
| 187 |
+
"url": "http://arxiv.org/html/2406.16272v2/extracted/5869587/fig/new_method.png"
|
| 188 |
+
},
|
| 189 |
+
"3": {
|
| 190 |
+
"figure_path": "2406.16272v2_figure_3.png",
|
| 191 |
+
"caption": "Figure 3: \nImages generated by original prompts and repaired prompts.",
|
| 192 |
+
"url": "http://arxiv.org/html/2406.16272v2/extracted/5869587/fig/appendix.png"
|
| 193 |
+
}
|
| 194 |
+
},
|
| 195 |
+
"validation": true,
|
| 196 |
+
"references": [
|
| 197 |
+
{
|
| 198 |
+
"1": {
|
| 199 |
+
"title": "Understanding hallucinations in diffusion models through mode interpolation.",
|
| 200 |
+
"author": "Sumukh K Aithal, Pratyush Maini, Zachary C Lipton, and J Zico Kolter. 2024.",
|
| 201 |
+
"venue": "arXiv preprint arXiv:2406.09358.",
|
| 202 |
+
"url": null
|
| 203 |
+
}
|
| 204 |
+
},
|
| 205 |
+
{
|
| 206 |
+
"2": {
|
| 207 |
+
"title": "All are worth words: a vit backbone for score-based diffusion models.",
|
| 208 |
+
"author": "Fan Bao, Chongxuan Li, Yue Cao, and Jun Zhu. 2022.",
|
| 209 |
+
"venue": "CoRR, abs/2209.12152.",
|
| 210 |
+
"url": null
|
| 211 |
+
}
|
| 212 |
+
},
|
| 213 |
+
{
|
| 214 |
+
"3": {
|
| 215 |
+
"title": "Natural Language Processing with Python.",
|
| 216 |
+
"author": "Steven Bird, Ewan Klein, and Edward Loper. 2009.",
|
| 217 |
+
"venue": "O\u2019Reilly.",
|
| 218 |
+
"url": null
|
| 219 |
+
}
|
| 220 |
+
},
|
| 221 |
+
{
|
| 222 |
+
"4": {
|
| 223 |
+
"title": "Qualitative failures of image generation models and their application in detecting deepfakes.",
|
| 224 |
+
"author": "Ali Borji. 2023.",
|
| 225 |
+
"venue": "Image and Vision Computing, 137:104771.",
|
| 226 |
+
"url": null
|
| 227 |
+
}
|
| 228 |
+
},
|
| 229 |
+
{
|
| 230 |
+
"5": {
|
| 231 |
+
"title": "Language models are few-shot learners.",
|
| 232 |
+
"author": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020.",
|
| 233 |
+
"venue": "Advances in neural information processing systems, 33:1877\u20131901.",
|
| 234 |
+
"url": null
|
| 235 |
+
}
|
| 236 |
+
},
|
| 237 |
+
{
|
| 238 |
+
"6": {
|
| 239 |
+
"title": "A survey on evaluation of large language models.",
|
| 240 |
+
"author": "Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. 2024.",
|
| 241 |
+
"venue": "ACM Transactions on Intelligent Systems and Technology, 15(3):1\u201345.",
|
| 242 |
+
"url": null
|
| 243 |
+
}
|
| 244 |
+
},
|
| 245 |
+
{
|
| 246 |
+
"7": {
|
| 247 |
+
"title": "Jailbreaking black box large language models in twenty queries.",
|
| 248 |
+
"author": "Patrick Chao, Alexander Robey, Edgar Dobriban, Hamed Hassani, George J. Pappas, and Eric Wong. 2023.",
|
| 249 |
+
"venue": "CoRR, abs/2310.08419.",
|
| 250 |
+
"url": null
|
| 251 |
+
}
|
| 252 |
+
},
|
| 253 |
+
{
|
| 254 |
+
"8": {
|
| 255 |
+
"title": "Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models.",
|
| 256 |
+
"author": "Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, and Daniel Cohen-Or. 2023.",
|
| 257 |
+
"venue": "ACM Trans. Graph., 42(4):148:1\u2013148:10.",
|
| 258 |
+
"url": null
|
| 259 |
+
}
|
| 260 |
+
},
|
| 261 |
+
{
|
| 262 |
+
"9": {
|
| 263 |
+
"title": "Diffusion models beat gans on image synthesis.",
|
| 264 |
+
"author": "Prafulla Dhariwal and Alexander Quinn Nichol. 2021.",
|
| 265 |
+
"venue": "In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 8780\u20138794.",
|
| 266 |
+
"url": null
|
| 267 |
+
}
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"10": {
|
| 271 |
+
"title": "Training-free structured diffusion guidance for compositional text-to-image synthesis.",
|
| 272 |
+
"author": "Weixi Feng, Xuehai He, Tsu-Jui Fu, Varun Jampani, Arjun R. Akula, Pradyumna Narayana, Sugato Basu, Xin Eric Wang, and William Yang Wang. 2023.",
|
| 273 |
+
"venue": "In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.",
|
| 274 |
+
"url": null
|
| 275 |
+
}
|
| 276 |
+
},
|
| 277 |
+
{
|
| 278 |
+
"11": {
|
| 279 |
+
"title": "Optimizing prompts for text-to-image generation.",
|
| 280 |
+
"author": "Yaru Hao, Zewen Chi, Li Dong, and Furu Wei. 2023.",
|
| 281 |
+
"venue": "In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023.",
|
| 282 |
+
"url": null
|
| 283 |
+
}
|
| 284 |
+
},
|
| 285 |
+
{
|
| 286 |
+
"12": {
|
| 287 |
+
"title": "Denoising diffusion probabilistic models.",
|
| 288 |
+
"author": "Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020.",
|
| 289 |
+
"venue": "In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.",
|
| 290 |
+
"url": null
|
| 291 |
+
}
|
| 292 |
+
},
|
| 293 |
+
{
|
| 294 |
+
"13": {
|
| 295 |
+
"title": "Glitch tokens in large language models: categorization taxonomy and effective detection.",
|
| 296 |
+
"author": "Yuxi Li, Yi Liu, Gelei Deng, Ying Zhang, Wenjia Song, Ling Shi, Kailong Wang, Yuekang Li, Yang Liu, and Haoyu Wang. 2024.",
|
| 297 |
+
"venue": "Proceedings of the ACM on Software Engineering, 1(FSE):2075\u20132097.",
|
| 298 |
+
"url": null
|
| 299 |
+
}
|
| 300 |
+
},
|
| 301 |
+
{
|
| 302 |
+
"14": {
|
| 303 |
+
"title": "Microsoft COCO: common objects in context.",
|
| 304 |
+
"author": "Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C. Lawrence Zitnick. 2014.",
|
| 305 |
+
"venue": "In Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, volume 8693 of Lecture Notes in Computer Science, pages 740\u2013755. Springer.",
|
| 306 |
+
"url": null
|
| 307 |
+
}
|
| 308 |
+
},
|
| 309 |
+
{
|
| 310 |
+
"15": {
|
| 311 |
+
"title": "Compositional visual generation with composable diffusion models.",
|
| 312 |
+
"author": "Nan Liu, Shuang Li, Yilun Du, Antonio Torralba, and Joshua B. Tenenbaum. 2022.",
|
| 313 |
+
"venue": "In Computer Vision - ECCV 2022 - 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part XVII, volume 13677 of Lecture Notes in Computer Science, pages 423\u2013439. Springer.",
|
| 314 |
+
"url": null
|
| 315 |
+
}
|
| 316 |
+
},
|
| 317 |
+
{
|
| 318 |
+
"16": {
|
| 319 |
+
"title": "Discovering failure modes of text-guided diffusion models via adversarial search.",
|
| 320 |
+
"author": "Qihao Liu, Adam Kortylewski, Yutong Bai, Song Bai, and Alan Yuille. 2023.",
|
| 321 |
+
"venue": null,
|
| 322 |
+
"url": "http://arxiv.org/abs/2306.00974"
|
| 323 |
+
}
|
| 324 |
+
},
|
| 325 |
+
{
|
| 326 |
+
"17": {
|
| 327 |
+
"title": "Design guidelines for prompt engineering text-to-image generative models.",
|
| 328 |
+
"author": "Vivian Liu and Lydia B. Chilton. 2022.",
|
| 329 |
+
"venue": "In CHI \u201922: CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April 2022 - 5 May 2022, pages 384:1\u2013384:23. ACM.",
|
| 330 |
+
"url": null
|
| 331 |
+
}
|
| 332 |
+
},
|
| 333 |
+
{
|
| 334 |
+
"18": {
|
| 335 |
+
"title": "Groot: Adversarial testing for generative text-to-image models with tree-based semantic transformation.",
|
| 336 |
+
"author": "Yi Liu, Guowei Yang, Gelei Deng, Feiyue Chen, Yuqi Chen, Ling Shi, Tianwei Zhang, and Yang Liu. 2024.",
|
| 337 |
+
"venue": "arXiv preprint arXiv:2402.12100.",
|
| 338 |
+
"url": null
|
| 339 |
+
}
|
| 340 |
+
},
|
| 341 |
+
{
|
| 342 |
+
"19": {
|
| 343 |
+
"title": "Generating images from captions with attention.",
|
| 344 |
+
"author": "Elman Mansimov, Emilio Parisotto, Lei Jimmy Ba, and Ruslan Salakhutdinov. 2016.",
|
| 345 |
+
"venue": "In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.",
|
| 346 |
+
"url": null
|
| 347 |
+
}
|
| 348 |
+
},
|
| 349 |
+
{
|
| 350 |
+
"20": {
|
| 351 |
+
"title": "Tree of attacks: Jailbreaking black-box llms automatically.",
|
| 352 |
+
"author": "Anay Mehrotra, Manolis Zampetakis, Paul Kassianik, Blaine Nelson, Hyrum S. Anderson, Yaron Singer, and Amin Karbasi. 2023.",
|
| 353 |
+
"venue": "CoRR, abs/2312.02119.",
|
| 354 |
+
"url": null
|
| 355 |
+
}
|
| 356 |
+
},
|
| 357 |
+
{
|
| 358 |
+
"21": {
|
| 359 |
+
"title": "Wordnet: A lexical database for english.",
|
| 360 |
+
"author": "George A. Miller. 1995.",
|
| 361 |
+
"venue": "Commun. ACM, 38(11):39\u201341.",
|
| 362 |
+
"url": null
|
| 363 |
+
}
|
| 364 |
+
},
|
| 365 |
+
{
|
| 366 |
+
"22": {
|
| 367 |
+
"title": "Improved denoising diffusion probabilistic models.",
|
| 368 |
+
"author": "Alexander Quinn Nichol and Prafulla Dhariwal. 2021.",
|
| 369 |
+
"venue": "In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 8162\u20138171. PMLR.",
|
| 370 |
+
"url": null
|
| 371 |
+
}
|
| 372 |
+
},
|
| 373 |
+
{
|
| 374 |
+
"23": {
|
| 375 |
+
"title": "A taxonomy of prompt modifiers for text-to-image generation. arxiv.",
|
| 376 |
+
"author": "Jonas Oppenlaender. 2022.",
|
| 377 |
+
"venue": "arXiv preprint arXiv:2204.13988.",
|
| 378 |
+
"url": null
|
| 379 |
+
}
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"24": {
|
| 383 |
+
"title": "Learning transferable visual models from natural language supervision.",
|
| 384 |
+
"author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021.",
|
| 385 |
+
"venue": "In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 8748\u20138763. PMLR.",
|
| 386 |
+
"url": null
|
| 387 |
+
}
|
| 388 |
+
},
|
| 389 |
+
{
|
| 390 |
+
"25": {
|
| 391 |
+
"title": "Hierarchical text-conditional image generation with CLIP latents.",
|
| 392 |
+
"author": "Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022a.",
|
| 393 |
+
"venue": "CoRR, abs/2204.06125.",
|
| 394 |
+
"url": null
|
| 395 |
+
}
|
| 396 |
+
},
|
| 397 |
+
{
|
| 398 |
+
"26": {
|
| 399 |
+
"title": "Hierarchical text-conditional image generation with CLIP latents.",
|
| 400 |
+
"author": "Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022b.",
|
| 401 |
+
"venue": "CoRR, abs/2204.06125.",
|
| 402 |
+
"url": null
|
| 403 |
+
}
|
| 404 |
+
},
|
| 405 |
+
{
|
| 406 |
+
"27": {
|
| 407 |
+
"title": "High-resolution image synthesis with latent diffusion models.",
|
| 408 |
+
"author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer. 2022.",
|
| 409 |
+
"venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 10674\u201310685. IEEE.",
|
| 410 |
+
"url": null
|
| 411 |
+
}
|
| 412 |
+
},
|
| 413 |
+
{
|
| 414 |
+
"28": {
|
| 415 |
+
"title": "Photorealistic text-to-image diffusion models with deep language understanding.",
|
| 416 |
+
"author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L. Denton, Seyed Kamyar Seyed Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi. 2022.",
|
| 417 |
+
"venue": "In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.",
|
| 418 |
+
"url": null
|
| 419 |
+
}
|
| 420 |
+
},
|
| 421 |
+
{
|
| 422 |
+
"29": {
|
| 423 |
+
"title": "Generating images of rare concepts using pre-trained diffusion models.",
|
| 424 |
+
"author": "Dvir Samuel, Rami Ben-Ari, Simon Raviv, Nir Darshan, and Gal Chechik. 2024.",
|
| 425 |
+
"venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 4695\u20134703.",
|
| 426 |
+
"url": null
|
| 427 |
+
}
|
| 428 |
+
},
|
| 429 |
+
{
|
| 430 |
+
"30": {
|
| 431 |
+
"title": "What the DAAM: interpreting stable diffusion using cross attention.",
|
| 432 |
+
"author": "Raphael Tang, Linqing Liu, Akshat Pandey, Zhiying Jiang, Gefei Yang, Karun Kumar, Pontus Stenetorp, Jimmy Lin, and Ferhan Ture. 2023.",
|
| 433 |
+
"venue": "In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 5644\u20135659. Association for Computational Linguistics.",
|
| 434 |
+
"url": null
|
| 435 |
+
}
|
| 436 |
+
},
|
| 437 |
+
{
|
| 438 |
+
"31": {
|
| 439 |
+
"title": "Attention is all you need.",
|
| 440 |
+
"author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017.",
|
| 441 |
+
"venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998\u20136008.",
|
| 442 |
+
"url": null
|
| 443 |
+
}
|
| 444 |
+
},
|
| 445 |
+
{
|
| 446 |
+
"32": {
|
| 447 |
+
"title": "Attribute2image: Conditional image generation from visual attributes.",
|
| 448 |
+
"author": "Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. 2016.",
|
| 449 |
+
"venue": "In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV, volume 9908 of Lecture Notes in Computer Science, pages 776\u2013791. Springer.",
|
| 450 |
+
"url": null
|
| 451 |
+
}
|
| 452 |
+
},
|
| 453 |
+
{
|
| 454 |
+
"33": {
|
| 455 |
+
"title": "Improving text-to-image synthesis using contrastive learning.",
|
| 456 |
+
"author": "Hui Ye, Xiulong Yang, Martin Tak\u00e1c, Rajshekhar Sunderraman, and Shihao Ji. 2021.",
|
| 457 |
+
"venue": "In 32nd British Machine Vision Conference 2021, BMVC 2021, Online, November 22-25, 2021, page 154. BMVA Press.",
|
| 458 |
+
"url": null
|
| 459 |
+
}
|
| 460 |
+
},
|
| 461 |
+
{
|
| 462 |
+
"34": {
|
| 463 |
+
"title": "Membership inference on text-to-image diffusion models via conditional likelihood discrepancy.",
|
| 464 |
+
"author": "Shengfang Zhai, Huanran Chen, Yinpeng Dong, Jiajun Li, Qingni Shen, Yansong Gao, Hang Su, and Yang Liu. 2024a.",
|
| 465 |
+
"venue": "arXiv preprint arXiv:2405.14800.",
|
| 466 |
+
"url": null
|
| 467 |
+
}
|
| 468 |
+
},
|
| 469 |
+
{
|
| 470 |
+
"35": {
|
| 471 |
+
"title": "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning.",
|
| 472 |
+
"author": "Shengfang Zhai, Yinpeng Dong, Qingni Shen, Shi Pu, Yuejian Fang, and Hang Su. 2023.",
|
| 473 |
+
"venue": "In Proceedings of the 31st ACM International Conference on Multimedia, pages 1577\u20131587.",
|
| 474 |
+
"url": null
|
| 475 |
+
}
|
| 476 |
+
},
|
| 477 |
+
{
|
| 478 |
+
"36": {
|
| 479 |
+
"title": "Discovering universal semantic triggers for text-to-image synthesis.",
|
| 480 |
+
"author": "Shengfang Zhai, Weilong Wang, Jiajun Li, Yinpeng Dong, Hang Su, and Qingni Shen. 2024b.",
|
| 481 |
+
"venue": "arXiv preprint arXiv:2402.07562.",
|
| 482 |
+
"url": null
|
| 483 |
+
}
|
| 484 |
+
},
|
| 485 |
+
{
|
| 486 |
+
"37": {
|
| 487 |
+
"title": "DM-GAN: dynamic memory generative adversarial networks for text-to-image synthesis.",
|
| 488 |
+
"author": "Minfeng Zhu, Pingbo Pan, Wei Chen, and Yi Yang. 2019.",
|
| 489 |
+
"venue": "In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 5802\u20135810. Computer Vision Foundation / IEEE.",
|
| 490 |
+
"url": null
|
| 491 |
+
}
|
| 492 |
+
}
|
| 493 |
+
],
|
| 494 |
+
"url": "http://arxiv.org/html/2406.16272v2"
|
| 495 |
+
}
|
20240921/2407.04440v2.json
ADDED
|
@@ -0,0 +1,230 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Spatiotemporal Forecasting of Traffic Flow using Wavelet-based Temporal Attention",
|
| 3 |
+
"abstract": "Spatiotemporal forecasting of traffic flow data represents a typical problem in the field of machine learning, impacting urban traffic management systems. In general, spatiotemporal forecasting problems involve complex interactions, nonlinearities, and long-range dependencies due to the interwoven nature of the temporal and spatial dimensions. Due to this, traditional statistical and machine learning methods cannot adequately handle the temporal and spatial dependencies in these complex traffic flow datasets. A prevalent approach in the field combines graph convolutional networks and multi-head attention mechanisms for spatiotemporal processing. This paper proposes a wavelet-based temporal attention model, namely a wavelet-based dynamic spatiotemporal aware graph neural network (W-DSTAGNN), for tackling the traffic forecasting problem. Wavelet decomposition can help by decomposing the signal into components that can be analyzed independently, reducing the impact of non-stationarity and handling long-range dependencies of traffic flow datasets. Benchmark experiments using three popularly used statistical metrics confirm that our proposal efficiently captures spatiotemporal correlations and outperforms ten state-of-the-art models (including both temporal and spatiotemporal benchmarks) on three publicly available traffic datasets. Our proposed ensemble method can better handle dynamic temporal and spatial dependencies and make reliable long-term forecasts. In addition to point forecasts, our proposed model can generate interval forecasts that significantly enhance probabilistic forecasting for traffic datasets.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Rapid urbanization and population growth contribute to severe traffic congestion, which negatively affects both traffic safety and environmental conditions [1 ###reference_b1###]. To address these challenges and mitigate congestion, urban planners are increasingly implementing intelligent transportation systems (ITS) in growing cities and metropolitan areas. Recent advancements in sensing technologies, along with the widespread deployment of ground-based sensors on roads and subways, enable the detection of real-time traffic conditions. The resulting large-scale traffic data collection facilitates the design of early intervention strategies through traffic forecasting [2 ###reference_b2###]. These strategies help traffic controllers enhance the efficiency of transportation systems and reduce congestion-related issues.\nAccurate traffic flow forecasting, a key component of ITS, has emerged as a prominent research area [3 ###reference_b3###]. The development of models capable of predicting and preventing traffic congestion, optimizing traffic regulation, and identifying optimal travel routes is essential for the success of ITS in urban environments. Recently, data-driven traffic flow forecasting methods have gained significant attention, largely due to the availability of real-world datasets like the Performance Measurement System (PeMS) dataset, which is collected from individual sensors deployed across major metropolitan areas of California by California Transportation Agencies (CalTrans).\nPrevious studies in this domain have primarily focused on forecasting traffic flow by extrapolating historical traffic patterns. Classical time series analysis techniques, ranging from autoregressive integrated moving average (ARIMA) models to multivariate vector autoregression (VAR) models, have been utilized for traffic forecasting tasks [4 ###reference_b4###, 5 ###reference_b5###]. However, these models struggle to accurately capture the complexities of non-stationary time sequences in traffic data. More recently, machine learning approaches like support vector regression (SVR) have been applied to address these challenges [6 ###reference_b6###]. Despite their benefits, deep learning frameworks\u2014particularly those incorporating attention mechanisms or convolutional layers\u2014have shown superior performance, as they automate preprocessing and can better handle the intricacies of traffic flow data. This has led to the widespread adoption of deep learning architectures in traffic forecasting [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###].\nAlthough deep learning-based temporal architectures have shown promising results, they often fail to capture the spatial dependencies inherent in traffic data effectively [12 ###reference_b12###, 13 ###reference_b13###]. While temporal models primarily focus on learning the historical characteristics to forecast future dynamics, a detailed analysis reveals that both temporal and spatial patterns influence traffic flow [14 ###reference_b14###]. For instance, traffic flow is affected by the time of the day we are considering - traffic can be high near residential areas during evening office closure times or near a school during the school opening times. Traffic can also be affected by nearby traffic conditions - if a road has very low traffic (perhaps due to some construction work), the traffic might be higher on nearby roads [15 ###reference_b15###]. 
Moreover, all these patterns can change depending upon the day - traffic near schools and offices will be much less on weekends, while traffic near malls and shopping complexes will be higher on weekends, and vice-versa. These complex patterns make traffic flow prediction a very complicated task [16 ###reference_b16###]. To model these dynamic changes in the traffic flow datasets, researchers have focused on designing the problem as a spatiotemporal forecasting setup [17 ###reference_b17###]. In recent years, graph-based deep learning models, particularly graph neural networks (GNN), have emerged as a powerful tool for handling spatiotemporal datasets in various domains ranging from sensor networks [18 ###reference_b18###, 19 ###reference_b19###] and climate modeling [20 ###reference_b20###, 21 ###reference_b21###] to traffic control systems [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###]. Variants of GNN frameworks have achieved state-of-the-art performance in traffic forecasting problems due to their ability to capture spatial dependencies using non-Euclidean graph structures. For instance, an encoder-decoder architecture (both with spatial and temporal attention) along with a transform attention layer between the encoder and decoder, namely graph multi-attention network (GMAN), has been proposed [25 ###reference_b25###]. Dynamic spatiotemporal aware graph neural network (DSTAGNN) [23 ###reference_b23###] has been one of the recent state-of-the-art models for traffic forecasting due to its ability to handle the high-dimensional dynamic spatiotemporal nature of the traffic flow datasets. However, it fails to generate long-term forecasts and capture the seasonal patterns in the temporal structures of the PeMS traffic flow data [26 ###reference_b26###].\nAlongside the forecasting technique, several decomposition techniques, such as Fourier transforms [27 ###reference_b27###], Fast Fourier transforms [28 ###reference_b28###], and Wavelet decomposition [29 ###reference_b29###], have shown competencies in time series pre-processing tasks, among many others. A recent study by Sasal et al. has shown that when a wavelet-transformed sequence is fed into a transformer (and then inverse-transformed after forecasting), it improves the performance of the transformer for temporal forecasting of long-sequence data [30 ###reference_b30###]. To overcome the issues with DSTAGNN and other spatiotemporal forecasting models for traffic data, we introduce wavelet-based temporal attention that can effectively model temporal dynamics and spatial patterns of PeMS datasets. Our proposed wavelet-based dynamic spatiotemporal aware graph neural network (W-DSTAGNN) method can simultaneously handle the non-stationarity and nonlinear structure of the spatiotemporal data and can generate long-term forecasts for traffic conditions. In addition, our proposal, combined with conformal prediction, can generate prediction intervals for probabilistic forecasting of traffic flow data, which will be of immense use for traffic management systems. Our contributions can be summarized as follows:\nWe propose a novel framework that combines maximal overlapping discrete wavelet transformation (MODWT) and the temporal attention module as W-DSTAGNN for learning long-term temporal and spatial dependencies of traffic conditions.\nThe proposed W-DSTAGNN captures the nonlinearity, non-stationarity, and complicated relations between the nodes in a better way than the standard traffic forecasting models. 
This is confirmed by large-scale experiments with three datasets and eleven baselines.\nMultiple comparisons with the best (MCB) test are performed to show that our model indeed performs better than the baselines. We also presented a conformal prediction plot to give further evidence for the competence of our method in generating prediction intervals.\nThe rest of the paper is organized as follows: Section II ###reference_### reviews various time series forecasting approaches designed for traffic forecasting tasks. Section III ###reference_### briefly describes the features of wavelet decomposition along with its mathematical formulation. Section IV ###reference_### introduces the proposed W-DSTAGNN approach for spatiotemporal forecasting. Section V ###reference_### highlights the efficiency of the proposal over baseline forecasters using extensive experiments with three real-world datasets and statistical significance tests. Section VI ###reference_### highlights the drawbacks and limitations of our method along with the scope for future improvements. Finally, Section VII ###reference_### concludes the paper."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Related Work",
|
| 15 |
+
"text": "With the number of vehicles on the road increasing every year, it is no surprise that current traffic management systems need to become even more effective. Traffic flow prediction plays a pivotal role in these systems [16 ###reference_b16###]; however, accurately predicting traffic flow remains a challenging task. In the past, practitioners and researchers have utilized historical traffic data and explored forecasting approaches from various paradigms to analyze traffic conditions [2 ###reference_b2###, 31 ###reference_b31###, 32 ###reference_b32###]. These studies have focused on modeling both temporal and spatiotemporal dependencies better to understand the complex dynamics within traffic flow datasets. In this section, we provide a brief overview of the various approaches adopted in the literature to address traffic forecasting challenges."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A Temporal forecasting approaches",
|
| 21 |
+
"text": "Traditional time series forecasting models have been a popular choice among practitioners for modeling traffic flow datasets [33 ###reference_b33###]. These frameworks typically extrapolate the historical patterns from stationary traffic flow series to predict its future dynamics [34 ###reference_b34###]. Among these, popular architectures include the linear ARIMA model and its variants, often enhanced with Kalman filtering techniques [35 ###reference_b35###]. In 2009, Chandra et al. demonstrated how traffic speeds and volumes in Orlando, Florida, were influenced by both upstream and downstream location data and employed a VAR model to predict future traffic conditions [5 ###reference_b5###]. In recent years, advancements in sensor technologies have led to a significant increase in the availability of traffic flow datasets. To handle this surge of data, data-driven forecasting approaches have become mainstream in traffic prediction. For example, in 2011, Hong et al. [36 ###reference_b36###] applied a kernel-based SVR model to forecast inter-urban traffic flow in the northern Taiwan region. The main advantage of machine learning techniques over traditional methods lies in their ability to model nonlinear temporal dependencies [37 ###reference_b37###]. With the improvement of computational capabilities, deep learning architectures have also become an integral part of time series forecasting [7 ###reference_b7###, 38 ###reference_b38###]. Recurrent neural network-based models, such as long short-term memory (LSTM) networks and their variants, are widely used to capture temporal correlations in traffic flow datasets [39 ###reference_b39###]. These architectures utilize gated mechanisms to regulate information flow, which plays a crucial role in effectively capturing both short-term and long-term dependencies. Although these models offer numerous advantages over conventional forecasting methods, they struggle to incorporate spatial information. The complex spatial dependencies inherent in traffic flow data are difficult to account for using conventional forecasting approaches."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B Spatiotemporal forecasting approaches",
|
| 27 |
+
"text": "In recent years, many deep learning techniques have been employed to tackle the problem of high-dimensional spatiotemporal traffic prediction. Convolutional neural networks (CNNs) have been used in traffic forecasting due to their spatial information extraction abilities; for example, [40 ###reference_b40###] converts the road network to a regular 2D grid and applies CNN to predict the flow. Nowadays, graph convolutional networks (GCNs) are used to model spatial correlations in network data [41 ###reference_b41###], which put spectral graph theory into deep neural networks. In another recent work, [42 ###reference_b42###] proposed ChebNet, which boosts GCNs with fast localized convolution filters. More recently, diffusion convolutional recurrent neural network (DCRNN) [43 ###reference_b43###] introduces graph convolutional networks into spatiotemporal network data prediction, which employs a diffusion graph convolution network to understand the information diffusion process in spatial networks, along with RNN to model temporal correlations. Spatiotemporal synchronous graph convolutional network (STSGCN) [44 ###reference_b44###] concatenated the spatial graphs of multi-neighborhood time steps. Graph-WaveNet (GWN) [45 ###reference_b45###] designed a self-adaptive matrix to understand the changes of the influence between nodes and their neighbors. It used dilated casual convolutions for the temporal correlations, thus increasing the receptive field exponentially. Adaptive graph convolutional recurrent network (AGCRN) [46 ###reference_b46###] found hidden spatial dependencies via learnable embedding from nodes. However, the spatiotemporal layers cannot be stacked to expand the receptive field. GMAN [25 ###reference_b25###] is an encoder-decoder architecture with spatial and temporal attention modules to model spatiotemporal correlations. It also has a transform attention layer between the encoder and decoder to alleviate error propagation during long-term prediction. Thus, the traffic flow forecasting problem is an emerging research area for both transportation research and machine learning communities working on spatiotemporal data structures. A highly accurate traffic forecasting system impacts our day-to-day life. Our proposed methodology can be a long-term forecasting tool for traffic data modelers."
},
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "II-C Wavelet-based forecasters",
"text": "Wavelet transformation (WT) has demonstrated remarkable progress in time series analysis by enhancing the efficiency of individual forecasting methods [47 ###reference_b47###, 48 ###reference_b48###]. WT is particularly useful for extracting signals from noise in the time-frequency domain [29 ###reference_b29###]. This method decomposes a time series into high-frequency signals, which depict the details or short-term fluctuations, and low-frequency components, which capture smooth long-term trends. This time-frequency localization has made WT a valuable tool in forecasting across diverse fields, including epidemiology [49 ###reference_b49###], economics [50 ###reference_b50###], environmental studies [51 ###reference_b51###], geophysics [52 ###reference_b52###], traffic forecasting [53 ###reference_b53###], and others. In traffic forecasting, WT has been employed to remove the noise from the data, allowing for the modeling of the remaining stationary components using methods like the Kalman filter and neural networks [54 ###reference_b54###, 55 ###reference_b55###]. However, removing high-frequency components has led to discrepancies between the forecasts and ground truth data. To tackle this issue, Sun et al. [56 ###reference_b56###] applied WT on the passenger flow dataset from the Beijing subway system and modeled both the details and smooth coefficients using SVR. While this approach improved short-term forecasts, it failed to capture the spatial dependencies in the dataset. To overcome this limitation, Zhang et al. introduced the Motif-GCRNN framework for generating spatiotemporal forecasts of traffic speed in Chengdu, China [57 ###reference_b57###]. This framework applies WT to the traffic speed data and generates the corresponding high-frequency and low-frequency components. They used a graph convolution recurrent neural network (GCRNN) to model the smooth components and the ARMA model for the detail coefficients. Despite its ability to capture both smooth and detailed fluctuations in the traffic speed dataset, Motif-GCRNN struggles with dynamic changes in spatiotemporal patterns. Additionally, the linear ARMA model used for the detail coefficients is insufficient for handling the nonlinearities often present in traffic data. To address these challenges, we propose the W-DSTAGNN approach, which integrates WT with a dynamic GNN capable of handling non-stationarity, nonlinearity, and dynamic spatiotemporal patterns in traffic data."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "III Mathematical Preliminaries",
"text": ""
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "III-A Wavelet Transformation",
"text": "Wavelet is a \u2018small\u2019 wave-like oscillation, which is defined as a square-integrable function such that and . The second condition ensures that the wavelet is \u2018localized\u2019 in time, thus allowing it to capture both the time and frequency of a signal, unlike the Fourier transform, which is capable of capturing only the frequency of the signal. A wavelet transform converts a time series into a sequence of time-indexed observations, with each time series representing the original data in a particular frequency band. The wavelet transform can be done in two ways - continuous wavelet transform (CWT), which applies every possible wavelet to the original series, and discrete wavelet transform (DWT), which applies a finite number of wavelets at a specific time and location. In this study, we utilize the DWT approach that represents a series using an orthonormal basis and is widely used in hydrology [58 ###reference_b58###], epidemics [59 ###reference_b59###], geophysics [30 ###reference_b30###], and economics [50 ###reference_b50###], among others. The DWT uses a dyadic grid. For scale parameter and shift parameter , the equation for the decompositions using DWT is\nwhere the sum is over the entire time series, is the original time series (or signal), and is a mother wavelet."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "III-B Maximal Overlap Discrete Wavelet Transform (MODWT)",
"text": "Application of DWT requires the sample size to be exactly a power of 2. Thus, a modified version of the DWT, namely maximal overlap discrete wavelet transform (MODWT), is adopted for decomposing arbitrary time series [60 ###reference_b60###]. Both MODWT and DWT can accomplish multi-resolution analysis - a scale-based additive decomposition. However, in contrast to the usual DWT, in the MODWT, both wavelet and scaling coefficients are shift-invariant. Thus, circularly shifting the time series by any amount will circularly shift the MODWT details and smooth coefficients by a corresponding amount. This property is crucial, as it allows for the attention modules to be subjected to relatively \u2018smoother\u2019 data, which makes it easier for them to capture the underlying pattern. Also, contrary to the DWT details and smooth, the MODWT details and smooth are associated with zero-phase filters, thus allowing the extraction of true signal from noise in a multiresolution analysis of the original time series. This allows for each of the attention blocks to have meaningful weights associated with them, thus leading to a robust framework [60 ###reference_b60###].\nTo find the MODWT coefficients of level , the DWT coefficients are scaled and convolved with the original time series as follows.\nwhere are the details and scaling coefficients of DWT and . Note that all the wavelet coefficients will have the same length as the original time series. Thus, the coefficients can be expressed in a matrix notation as\nwhere and are square matrices of order consists of the wavelet and scaling filters respectively. Hence, using MODWT, the original time series can be represented as\nwhere indicates the level high frequency details and represents the low frequency trend components. For a graphical illustration of the MODWT approach, we present the MODWT wavelet and scaling coefficients obtained by applying the transformation with the Haar filter at level 2 (J = 2) on selected sensor location from all three datasets in Figure 1 ###reference_###.\n\n###figure_1### ###figure_2###"
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "IV Proposed Methodology: W-DSTAGNN",
"text": "The wavelet dynamic spatiotemporal aware graph neural network (W-DSTAGNN) architecture consists of stacked spatiotemporal attention blocks with a MODWT transformation as a pre-processing step in the temporal module and a prediction layer. To initialize the spatiotemporal blocks, we design the traffic road network in a graphical manner such that each sensor acts as a node in the graph, and the edges represent the corresponding connections between the nodes. In the W-DSTAGNN architecture, we compute the spatial association among the nodes using spatiotemporal aware distance (STAD), as proposed in [23 ###reference_b23###]. Thus the entry of the adjacency matrix () based on STAD can be represented as with being the STAD between the corresponding sensors. To ensure the sparsity level in the adjacency matrix, we set the sparsity hyperparameter such that for each node (), the number of non-zero elements is which has the maximum value. Thus, the spatiotemporal relevance graph (STRG) created using these sparse connections has the adjacency matrix with only non-zero elements. Along with STAD, we utilize the wavelet-based spatiotemporal attention block to capture the dynamic characteristics of the spatial dependencies with changes in time.\nIn the wavelet temporal attention (wTA) block, we first preprocess the data using the MODWT-based multiresolution analysis. Then, we use multi-head self-attention layers to capture the long-range correlation in the time series data. This enhances the effectiveness of modeling the dynamic temporal dependencies between the nodes. Thus, we first apply the MODWT transformation to the input and the residual attention from the previous layer to generate the corresponding details (, ) and smooth (, ) coefficients. In the W-DSTAGNN approach, we aim to apply temporal attention to by individually applying it to the details and smooth components and aggregating them using the inverse MODWT transformation. Thus, the wTA block can be mathematically represented as:\nwhere IMODWT is the Inverse MODWT, is the temporal attention applied to the details and smooth coefficients, which is defined as follows:\nwhere , , , and is just a reshape of , where is the number of time steps, is the feature dimension from the layer of the spatiotemporal block, and is the number of nodes ( is defined analogously). The spatial attention (SA) module receives as the input and applies the self-attention mechanism to compute the dynamic spatial dependencies. Mathematically, the attention output generated by the SA blocks can be represented as with\nfor , where and is the transpose of from the wTA layer. Thus, the output denotes the spatiotemporal dynamic dependencies obtained by aggregating the output of the spatiotemporal attention modules.\nThe output of the wavelet spatiotemporal attention module is then passed into the spatial convolution block, a standard spatial graph convolution module that performs graph convolution based on Chebyshev polynomial approximation using the STAG. It is responsible for fully exploiting the traffic network\u2019s topological characteristics and learning the structure-aware node features.\nwhere is learnable, , is the diagonal matrix with , is the largest eigenvalue of and is the order Chebyshev polynomial. We finally process the output from the spatial layer using a temporal-gated convolutional network.\nWe use the temporal gated convolution layer, which is composed of three Gated Tanh Units (GTU) with different receptive fields. 
The forecast can be obtained as with\nwhere is the convolution kernel of size , where are the first and second halves of with respect to the channel dimension and concatenation and pooling is done such that . A pictorial illustration of the W-DSTAGNN architecture is given in Figure 2 ###reference_###."
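The following PyTorch sketch conveys the spirit of the wavelet temporal attention (wTA) block described above: decompose the input with a MODWT, attend to the detail and smooth branches separately, and aggregate them via the inverse transform. The level-1 Haar transform, the layer sizes, and the class name `WaveletTemporalAttention` are simplifying assumptions made here for illustration, not the authors' code.

```python
# Hedged sketch of the wavelet temporal attention idea (level-1 Haar MODWT assumed).
import torch
import torch.nn as nn

def modwt_haar_level1(x: torch.Tensor):
    # x: (batch, time, features); circular level-1 Haar MODWT along the time axis.
    x_prev = torch.roll(x, shifts=1, dims=1)
    return (x - x_prev) / 2.0, (x + x_prev) / 2.0          # detail, smooth

def imodwt_haar_level1(w: torch.Tensor, v: torch.Tensor):
    # Additive multiresolution synthesis: detail + smooth reconstructs the input.
    detail = (w - torch.roll(w, shifts=-1, dims=1)) / 2.0
    smooth = (v + torch.roll(v, shifts=-1, dims=1)) / 2.0
    return detail + smooth

class WaveletTemporalAttention(nn.Module):
    def __init__(self, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        self.attn_detail = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attn_smooth = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w, v = modwt_haar_level1(x)
        w_att, _ = self.attn_detail(w, w, w)                # attention on fluctuations
        v_att, _ = self.attn_smooth(v, v, v)                # attention on the trend
        return imodwt_haar_level1(w_att, v_att)             # aggregate the two branches

if __name__ == "__main__":
    block = WaveletTemporalAttention(d_model=32, n_heads=4)
    x = torch.randn(8, 12, 32)                              # (batch, 12 time steps, features)
    print(block(x).shape)                                   # torch.Size([8, 12, 32])
```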
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Experimental Setup",
"text": "In this section, we empirically evaluate the performance of the proposed W-DSTAGNN framework by conducting benchmark comparisons with state-of-the-art forecasters. The following subsections provide a brief description of the traffic forecasting datasets along with their statistical properties (Section V-A ###reference_###), the baseline models with their implementation strategies (Section V-B ###reference_###), key performance indicators (Section V-C ###reference_###), experimental setup and benchmark comparisons (Section V-D ###reference_###), the statistical significance of the experimental results (Section V-E ###reference_###), the influence of the hyperparameters (Section V-F ###reference_###), and uncertainty quantification of our proposal (Section V-G ###reference_###)."
},
{
"section_id": "5.1",
"parent_section_id": "5",
"section_name": "Datasets",
"text": "To validate the performance of the W-DSTAGNN architecture, we conduct experiments on real-world traffic forecasting benchmark datasets acquired from the Caltrans PeMS. Our datasets include the PeMS-BAY dataset curated by [43 ###reference_b43###], as well as the PeMS03 and PeMS04 datasets preprocessed by [44 ###reference_b44###]. All traffic datasets are gathered in real-time from several monitoring sensors strategically positioned throughout the freeway system across all major metropolitan areas of California, USA. The PeMS-BAY dataset accumulates information from 325 sensors in the Bay area, covering a six-month timespan from January 1, 2017, to May 31, 2017. The sensor distribution for the PeMS-BAY dataset is visualized in Figure 3 ###reference_###. For the PeMS03 dataset, 358 sensors were selected, and three months of data were collected from September 1, 2018, to November 30, 2018. For PeMS04, data from 307 selected sensors are collected for two months, spanning from January 1, 2018, to February 28, 2018. For all the datasets, aggregated traffic speed readings at 5-minute intervals are used in the experimental analysis. A summary of all datasets, including the number of sensors (nodes), number of samples, sample rate, and time range, is provided in Table I ###reference_###. Furthermore, we present a correlation heatmap of the traffic flow time series for selected sensor locations of the PeMS-BAY dataset in Figure 4 ###reference_###. As depicted in the plot, most of the time, the series monitored by different sensors that possess significant correlations with their counterparts from other sensors. This highlights the presence of spatial and temporal interdependency in the dataset. For preprocessing the datasets, we apply normalization as to ensure zero mean and unit variance before training our forecasting models. In addition, we study several global features of these time series as listed below:\nStationarity is a fundamental property of time series data, ensuring that its statistical characteristics, such as mean and variance, remain constant over time. This property is essential for maintaining the forecastability of the series and is a key assumption in various forecasting models. To assess the stationarity of the time series dataset, we use the Kwiatkowski\u2013Phillips\u2013Schmidt\u2013Shin (KPSS) test, implemented via the \u2018kpss.test\u2019 function from the tseries package in R.\nLinearity is another important property of a time series for determining the appropriate forecasting model. A linear time series indicates that the data-generating process follows linear patterns. In this study, we apply Teraesvirta\u2019s neural network test to assess nonlinearity, using the \u2018nonlinearityTest\u2019 function from the nonlinearTseries package in R.\nLong-term dependency of a time series plays a significant role in probabilistic time series modeling. To determine the long-range dependency of the time series datasets, we compute the Hurst exponent using the \u2018hurstexp\u2019 function of pracma package in R.\nSeasonality of a time series indicates the repeating patterns of the series at regular intervals. To detect these recurring fluctuations in the dataset, we implement Ollech and Webel\u2019s combined seasonality test using the \u2018isSeasonal\u2019 function from the seastests package in R.\nNormality assumption of the observations plays a crucial role in the methodological development of statistical models in time series analysis. 
In our study, we perform the Anderson-Darling normality test using the \u2018ad.test\u2019 function from the nortest package in R to detect any departure of the time series observations from the normality assumption.\nOn performing the above-mentioned statistical tests at 5% level of significance, we compute the global characteristics of the datasets and report the values in Table I ###reference_###. As the table highlights, all the time series observations from different sensor locations are long-term dependent and non-normal. Most time series from different datasets are non-linear and have seasonal patterns. Additionally, some of the series demonstrate non-stationary behavior.\n\n###figure_3### ###figure_4###"
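As a rough Python counterpart to the R-based diagnostics listed above (which rely on the tseries, nonlinearTseries, pracma, seastests, and nortest packages), the sketch below shows the z-score normalization and a KPSS stationarity check on a synthetic 5-minute series; statsmodels' `kpss` stands in for `kpss.test`, and the toy data are not from PeMS.

```python
# Illustrative sketch: normalisation and a KPSS stationarity check in Python.
import numpy as np
from statsmodels.tsa.stattools import kpss

rng = np.random.default_rng(42)
# Toy 5-minute traffic speed series: daily seasonality (288 steps/day) plus noise.
t = np.arange(288 * 7)
speed = 60 + 8 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 2, t.size)

# Z-score normalisation to zero mean and unit variance, as used before model training.
speed_norm = (speed - speed.mean()) / speed.std()

# KPSS test: the null hypothesis is (level) stationarity.
stat, p_value, _, _ = kpss(speed_norm, regression="c", nlags="auto")
print(f"KPSS statistic={stat:.3f}, p-value={p_value:.3f}")
print("Reject stationarity at 5% level" if p_value < 0.05 else "No evidence against stationarity")
```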
},
{
"section_id": "5.2",
"parent_section_id": "5",
"section_name": "Baseline models",
"text": "In this section, we briefly explain the baseline models used in our experimental analysis and discuss their implementation strategies as adopted from [43 ###reference_b43###].\nAutoregressive Integrated Moving Average (ARIMA) is a classical time series forecasting approach used for tracking the linear trajectories in a time series data [35 ###reference_b35###]. This framework applies differencing to obtain stationarity and models the lagged values of the original time series with the lagged error observations to generate forecasts. Implementation of the ARIMA model (with Kalman filter) is done with three lagged values and one lagged error using the statsmodel [61 ###reference_b61###] python package.\nSupport Vector Regression (SVR) is a supervised machine learning technique that fits an optimal hyperplane to forecast the time series observations [62 ###reference_b62###]. To fit the SVR model, we transform the dataset from each node into a supervised setup, such that the future observations of the series relate to its previous 5 observations. Based on this transformed dataset, we fit the radial basis kernel-based SVR model with the loss penalty term and generate the multi-step ahead forecasts using the sktime python package.\nVector Autoregressive (VAR) model is a linear multivariate forecasting technique that can model the pairwise relationships among different time series [63 ###reference_b63###]. This framework is a generalization of the univariate ARIMA model with the capability of incorporating feedback relationships from other variables. This framework treats each time series as an endogenous variable and utilizes auxiliary information from other time series to generate the corresponding forecasts. Implementation is done by setting the number of lags to 3, using the statsmodel [61 ###reference_b61###] python package.\nFC-LSTM [39 ###reference_b39###] is an encoder-decoder framework using LSTM with peephole. There are two recurrent layers in the encoder and the decoder. In each recurrent layer, there are 256 LSTM units. The L1 weight decay is and L2 weight decay is . The batch size is 64, and the loss function is MAE. The initial learning rate is , which reduces by every 10 epochs starting from the 20th epoch. Early stop is also performed by looking at the validation error.\nDiffusion Convolutional Recurrent Neural Network (DCRNN) model is a spatiotemporal forecasting technique that integrates bidirectional random walks on the graphs with recurrent neural networks [43 ###reference_b43###]. This architecture performs diffusion convolution on the graphs to capture the spatial dependencies and model them using an encoder-decoder architecture with scheduled sampling techniques to generate long-term spatiotemporal forecasts. In our study, we adopt the implementation of the DCRNN approach from the open-access GitHub repository ###reference_### of [43 ###reference_b43###].\nSpatial-Temporal Graph Convolutional Networks (STGCN) combine graph convolution with gated temporal convolutions to effectively capture both spatial and temporal dependencies in spatiotemporal datasets [64 ###reference_b64###]. The model consists of multiple convolutional layers, allowing faster training and fewer parameters. 
The implementation of the STGCN model is based on the GitHub repository ###reference_18### provided by [64 ###reference_b64###].\nSpatial-Temporal Synchronous Graph Convolutional Networks (STSGCN) is a robust spatiotemporal modeling technique that captures the localized spatiotemporal correlations along with the heterogeneities in the dataset [44 ###reference_b44###]. This framework applies several graph convolution operations to model the spatial dependencies and utilizes two fully connected layers for the temporal patterns. We adopt the implementation from the GitHub repository ###reference_### of [44 ###reference_b44###] to apply the STSGCN model.\nGraphWavenet (GWN), introduced in [45 ###reference_b45###], integrates graph convolution with dilated casual convolution to understand spatiotemporal dependencies. This model develops an adaptive dependency matrix through node embeddings to capture the hidden spatiotemporal patterns in the dataset efficiently. This framework employs a stacked 1D convolution layer with many receptive fields to model long-range dependencies in the temporal window. To implement this model, we adopted the code available at the GitHub repository ###reference_### of [45 ###reference_b45###].\nAdaptive Graph Convolutional Recurrent Network (AGCRN) framework performs spatiotemporal forecasting by integrating three key components: a Node Adaptive Parameter Learning (NAPL) module, a Data-Adaptive Graph Generation (DAGG) module, and a recurrent network [46 ###reference_b46###]. The NAPL and DAGG modules are designed to capture node-specific patterns and the interdependencies between the traffic series. At the same time, the recurrent network focuses on modeling the temporal dynamics within the dataset. The implementation of the AGCRN model is based on the GitHub repository ###reference_github.com/LeiBAI/AGCRN### provided by [46 ###reference_b46###].\nGraph Multi-Attention Network (GMAN) employs an encoder-decoder architecture with a transformer-based attention mechanism for spatiotemporal forecasting [25 ###reference_b25###]. The encoder consists of multiple spatiotemporal blocks that process the input, which is then transformed using the attention mechanism. The spatiotemporal attention blocks within the decoder generate the forecasts. The implementation of the GMAN model is based on the GitHub repository ###reference_### provided by [25 ###reference_b25###].\nDynamic Spatial-Temporal Aware Graph Neural Network (DSTAGNN) introduces a data-driven approach to capture complex dynamic spatiotemporal dependencies in road networks [23 ###reference_b23###]. This modified architecture leverages a multi-head attention mechanism to capture dynamic spatial relationships among nodes, while multi-scale gated convolutions are used to model dynamic temporal patterns. The implementation of DSTAGNN is based on the GitHub repository ###reference_### of [23 ###reference_b23###]."
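For orientation, here is a hedged sketch of how the classical baselines can be fit in Python; the ARIMA(3,0,1) order, the VAR lag of 3, and the 5-lag supervised reframing for the RBF-kernel SVR follow the description above, but the toy data and the sklearn-based SVR call are illustrative substitutes for the benchmark implementations.

```python
# Hedged sketch of the classical baselines (ARIMA, VAR, SVR) on toy data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.api import VAR
from sklearn.svm import SVR

rng = np.random.default_rng(0)
series = 60 + np.cumsum(rng.normal(0, 0.5, 500))                    # toy univariate speed series
panel = np.column_stack([series, series + rng.normal(0, 1, 500)])   # two correlated "sensors"

# ARIMA with three AR lags and one MA lag (state-space estimation uses a Kalman filter).
arima_fc = ARIMA(series, order=(3, 0, 1)).fit().forecast(steps=12)

# VAR with 3 lags: each sensor is regressed on the lagged values of all sensors.
var_res = VAR(panel).fit(maxlags=3)
var_fc = var_res.forecast(panel[-var_res.k_ar:], steps=12)

# RBF-kernel SVR on a supervised reframing: predict the next value from the last 5.
X = np.lib.stride_tricks.sliding_window_view(series[:-1], 5)
y = series[5:]
svr_fc = SVR(kernel="rbf", C=1.0).fit(X, y).predict(series[-5:].reshape(1, -1))

print(arima_fc[:3], var_fc.shape, svr_fc)
```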
},
{
"section_id": "5.3",
"parent_section_id": "5",
"section_name": "Performance measure",
"text": "To measure the performance of different forecasting frameworks, we use three performance indicators, namely, mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean squared error (RMSE). These metrics can be computed as:\nwhere is the testing data (ground truth), and is the corresponding forecast. By general convention, the model with the least performance measure is the \u2018best\u2019 forecasting model. We reported the testing errors computed between the test data and the data forecasted by the model."
},
{
"section_id": "5.4",
"parent_section_id": "5",
"section_name": "Experimental setup and performance comparison",
"text": "In the experimental setup, to ensure a fair comparison with the baseline models, we apply a train-validation-test split to our spatiotemporal datasets. The PeMS-BAY dataset is split in a 7:1:2 ratio, while the PeMS03 and PeMS04 datasets are divided using a 6:2:2 ratio. For forecasting, we use one hour of historical data to predict traffic flow for the following hour. The training of the W-DSTAGNN and baseline models on the spatiotemporal datasets is performed using the T4 GPU on Google Colab Pro+. Based on the validation loss, we tune the hyperparameters in the W-DSTAGNN framework. In our experiments, the order of Chebyshev\u2019s polynomial () is set to 3, indicating that the spatial attention layer uses 3 attention heads. The temporal gated convolution layer employs 32 convolution kernels of sizes = , along with a pooling layer of window size 2. Additionally, the spatial graph convolution layer utilizes 32 convolution kernels. In the wavelet temporal attention layer, we apply wavelet decomposition of level 2 and use 3 attention heads. The model architecture consists of 4 stacked spatiotemporal blocks, each containing a spatiotemporal attention module with 32 attention heads. For training, we utilize the Huber loss function with the Adam optimizer. The model is trained for 100 epochs with a learning rate of and a batch size of 32. For certain baseline models, we use the forecasts reported in the seminal works of [44 ###reference_b44###, 43 ###reference_b43###, 23 ###reference_b23###].\nTable II ###reference_### presents the performance comparison of the proposed W-DSTAGNN architecture with various baseline methods for forecasting 1-hour (12 steps) ahead traffic conditions across different locations. The results highlight that W-DSTAGNN consistently provides more accurate forecasts for the traffic flow datasets based on different key performance metrics. Specifically, for the PeMS-BAY dataset, W-DSTAGNN outperforms other models with over 96% forecast accuracy in terms of the MAPE metric. Similarly, the RMSE and the MAE metrics demonstrate the superiority of our model. In the PeMS03 dataset, W-DSTAGNN achieves the best forecasting performance across all three accuracy measures, highlighting its robustness. For the PeMS04 dataset, the DSTAGNN model shows competitive results with W-DSTAGNN based on MAPE values and GMAN slightly surpasses W-DSTAGNN with a margin of 1.34% for the MAE metric. Nevertheless, W-DSTAGNN still achieves the highest accuracy for the RMSE metric. Moreover, from the above experimental results, it is evident that the performance of the conventional architectures like ARIMA, SVR, VAR, and FC-LSTM drastically drops in comparison to the proposed W-DSTAGNN approach. This is due to their ability to capture only the temporal correlations, ignoring the spatial dependencies. However, for the other spatiotemporal models, their better accuracy measures over the temporal architectures highlight the importance of modeling spatiotemporal dependencies. Moreover, to emphasize the importance of wavelet transformation in the W-DSTAGNN model, we compare the performance improvement of W-DSTAGNN over standard DSTAGNN by computing\nThe performance enhancement values as reported in table II ###reference_### indicate that the W-DSTAGNN framework improves the RMSE score by a maximum of 2.51%, MAPE by 1.53%, and MAE by 1.67% of DSTAGNN. 
This improvement in the performance measures is primarily attributed to the use of MODWT decomposition in the DSTAGNN architecture, which helps segregate signals from noise in the input data. A higher node count allows for redundancies in the form of relations between the nodes, which do not get removed by MODWT. Thus, the temporal attention block is allowed to look for patterns in the transformed data, while the spatial attention is free to look into the intricate inter-node patterns that were missed by temporal attention, leading to better learning of the overall pattern in the dataset. A graphical illustration of the ground truth, along with the forecasts generated by the DSTAGNN and W-DSTAGNN for the first testing day of sensor 17 of PEMS-BAY, are presented in Figure 5 ###reference_###. Additionally, we present the MAE values of DSTAGNN and W-DSTAGNN models in each 5-minute interval for a 1-hour forecast period in Figure 6 ###reference_###.\n\n###figure_5### \n###figure_6###"
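The improvement computation can be made explicit with a two-line check using the PeMS-BAY RMSE values from Table II (DSTAGNN 3.98 vs. W-DSTAGNN 3.88), which reproduces the 2.51% figure quoted above.

```python
# Relative improvement of W-DSTAGNN over DSTAGNN, PeMS-BAY RMSE from Table II.
dstagnn_rmse, w_dstagnn_rmse = 3.98, 3.88
improvement = 100.0 * (dstagnn_rmse - w_dstagnn_rmse) / dstagnn_rmse
print(f"{improvement:.2f}%")  # ~2.51%
```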
},
{
"section_id": "5.5",
"parent_section_id": "5",
"section_name": "Statistical Significance",
"text": "Furthermore, to validate the statistical significance of our experimental evaluations, we used multiple comparisons with the best (MCB) test [65 ###reference_b65###]. This non-parametric test ranks the models based on their relative performance in terms of a specific metric and identifies the model with the minimum rank as the \u2018best\u2019 performing approach. In the subsequent step, it determines the critical distance for each of the competing forecasters as where represents the number of datasets and is the critical value of the Tukey distribution at level . This distribution-free test treats the critical distance of the \u2018best\u2019 performing model as the reference value of the test and compares the performance of the other models with this value. Figure 7 ###reference_### presents the MCB test result computed based on the MAE metric. This plot highlights that our proposal is the \u2018best\u2019 performing model as it achieves the minimum average rank of 1.67, followed by other spatiotemporal architectures. Moreover, the critical distance of the W-DSTAGNN model (shaded region) represents the reference value of the test. Since the critical distance of most of the temporal forecasting models lies well above the reference value, we can conclude that their performance is significantly inferior to the W-DSTAGNN model.\n\n###figure_7###"
},
{
"section_id": "5.6",
"parent_section_id": "5",
"section_name": "Effect of Hyperparameters",
"text": "To ensure a fair comparison between the proposed W-DSTAGNN and the baseline DSTAGNN approaches, we use the same hyperparameters (as discussed in Section V-D ###reference_###) for both models. Additionally, we investigate the impact of different MODWT decomposition levels on the forecast performance of the W-DSTAGNN architecture. Increasing the MODWT level enhances the number of temporal attention blocks, consequently increasing the training time. To optimize forecast performance with computational complexity, we limit the decomposition to 3 levels. Figure 8 ###reference_### illustrates the forecast performance of W-DSTAGNN using the MAPE metric for various MODWT levels. As evident from the plot, the second-level decomposition, with one low-frequency and two high-frequency components, achieves the best performance across all traffic datasets. This result suggests that the true signals in traffic flow data are effectively captured with a smooth series and two detail series, where the first detail series represents the most rapid fluctuations, the second captures moderate variations, and the remaining fluctuations can be treated as noise.\n\n###figure_8###"
},
{
"section_id": "5.7",
"parent_section_id": "5",
"section_name": "Conformal Predictions",
"text": "Alongside the point estimates, we utilize the conformal prediction approach to quantify the uncertainties associated with our proposal. The conformal prediction approach translates point estimates into prediction regions in a distribution-free, model-agnostic manner, guaranteeing convergence [66 ###reference_b66###]. In the time series forecasting setup, this method leverages the sequential nature of the time series dataset. Given the input series from a sensor, we fit the W-DSTAGNN and the uncertainty model on its lagged observations to generate the scalar notion of uncertainty. Thus, the conformal score can be computed as follows:\nSince possess a sequential pattern, thus we utilize a weighted conformal method with a fixed -sized window to compute the conformal quantile as\nThus the conformal prediction interval based on these weighted quantiles is given by:\nIn this study, we compute the conformal prediction interval with uncertainty quantification capacity for the first testing day of selected sensor locations of all three datasets and present it in Figure 9 ###reference_###. To restrict data leakage and generate reliable prediction intervals, we calculate the residuals of the trained model that are applied to a calibration (validation) set.\n\n###figure_9###"
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "VI Limitations and future scope of this study",
"text": "As our proposed spatiotemporal forecasting approach has two key components namely wavelet decomposition and dynamic spatiotemporal GNN, therefore, there are a few limitations of the proposal. The proposed model\u2019s complexity is higher than the state-of-the-art DSTAGNN model which might obstruct the scalability of the proposal for very large datasets. However, for medium and small sample-sized datasets this problem will not arise. In addition to this, the improvement in terms of RMSE is around 2%. This is because we performed all the experiments based on the benchmark papers where a 1-hour ahead (12 steps) forecast window is used for the performance evaluation. As our proposal is most suited for long-term forecasting, therefore, we may expect more significant improvement in terms of performance metrics for longer forecast horizons. This study promises several future scopes of research:\n(a) Implementation of our method for long-range spatiotemporal forecasting of traffic or other domain-specific datasets;\n(b) Incorporation of other causal variables that impact traffic flow inside the forecasting framework;\n(c) Improvement of W-DSTAGNN using faster versions of spherical harmonic transformation that results in reduced computational complexity."
},
{
"section_id": "7",
"parent_section_id": null,
"section_name": "VII Conclusion",
"text": "In this paper, we presented a spatiotemporal deep learning model to perform traffic forecasting integrating wavelet decomposition with a temporal attention mechanism. Our ensemble approach outperformed other state-of-the-art models on several real-world traffic flow datasets, specifying their potential to tour spatiotemporal structures from the input time series. The key advantage of our proposed W-DSTAGNN method is its capacity to generate accurate and reliable long-term forecasts of traffic flows and prediction intervals for business deployment. Wavelets used within our framework act as catalysts to tackle the input time series\u2019s non-Gaussian and long-range dependence structures. This ensemble framework can be useful for other potential application areas, such as spatiotemporal predictions of epidemics or studying the evolving behavior of social networks. We tested our method using statistical tests to verify its robustness over benchmark models. Apart from point forecasts, our proposed model can generate interval forecasts that significantly contribute to the probability forecasting of traffic datasets."
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Summary of the datasets with statistical features (values indicate the number of time series from different nodes that exhibit the statistical characteristic)</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.1\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.2\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">Nodes</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.3\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">Observations</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.4\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">Granularity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.5\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">Time span</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.6\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">Stationarity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.7\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">Linearity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.8\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">Long-term dependency</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.9\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">Seasonality</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1.10\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">Non-Normal</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.1.2.1.1\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">PeMS-BAY</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.2.1.2\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">325</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.2.1.3\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">52116</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.2.1.4\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">5 min</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.2.1.5\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">01/01/17-31/05/17</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.2.1.6\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.2.1.7\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.2.1.8\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">325</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.2.1.9\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">264</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.2.1.10\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">325</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S5.T1.1.3.2.1\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">PeMS03</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.3.2.2\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">358</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.3.2.3\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">26209</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.3.2.4\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">5 min</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.3.2.5\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">01/09/18-30/11/18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.3.2.6\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">287</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.3.2.7\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.3.2.8\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">358</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.3.2.9\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">358</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.3.2.10\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">358</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.4.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r\" id=\"S5.T1.1.4.3.1\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">PeMS04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.1.4.3.2\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">307</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.1.4.3.3\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">16992</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.1.4.3.4\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">5 min</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.1.4.3.5\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">01/01/18-28/02/18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.1.4.3.6\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">270</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.1.4.3.7\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.1.4.3.8\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">307</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.1.4.3.9\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">307</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.1.4.3.10\" style=\"padding-left:4.5pt;padding-right:4.5pt;\">307</td>\n</tr>\n</tbody>\n</table>\n</figure>",
"capture": "TABLE I: Summary of the datasets with statistical features (values indicate the number of time series from different nodes that exhibit the statistical characteristic)"
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Experiment Results show that the proposed W-DSTAGNN model <span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.2.1\">outperforms</span> all baseline models. \n<br class=\"ltx_break\"/>(* denotes reimplementation)</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.1\">\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_t\" id=\"S5.T2.3.1.1.1\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">Baselines</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_t\" colspan=\"3\" id=\"S5.T2.3.1.1.2\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">PeMS-BAY</th>\n<th class=\"ltx_td ltx_nopad_l ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_t\" colspan=\"3\" id=\"S5.T2.3.1.1.3\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">PeMS03</th>\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S5.T2.3.1.1.4\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">PeMS04</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.2.2\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T2.3.2.2.1\" style=\"padding-left:0.0pt;padding-right:0.0pt;\"></th>\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.3.2.2.2\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">MAE</th>\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.3.2.2.3\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">MAPE(%)</th>\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T2.3.2.2.4\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">RMSE</th>\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.3.2.2.5\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">MAE</th>\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.3.2.2.6\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">MAPE(%)</th>\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T2.3.2.2.7\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">RMSE</th>\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.3.2.2.8\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">MAE</th>\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.3.2.2.9\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">MAPE(%)</th>\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T2.3.2.2.10\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">RMSE</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.3.3.1\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.3.3.1.1\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">ARIMA\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04440v2#bib.bib35\" 
title=\"\">35</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.1.2\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">3.38</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.1.3\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">8.30</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.3.1.4\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">6.50</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.1.5\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">35.31</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.1.6\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">33.78</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.3.1.7\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">47.59</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.1.8\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">33.73</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.1.9\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">24.18</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.3.1.10\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">48.80</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.4.2\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T2.3.4.2.1\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">SVR\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04440v2#bib.bib62\" title=\"\">62</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.4.2.2\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">3.28</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.4.2.3\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">8.00</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.4.2.4\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">7.08</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.4.2.5\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">21.97</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.4.2.6\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">21.51</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.4.2.7\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">35.29</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.4.2.8\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">28.70</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.4.2.9\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">19.20</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.4.2.10\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">44.56</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.5.3\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T2.3.5.3.1\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">VAR\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2407.04440v2#bib.bib63\" title=\"\">63</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.5.3.2\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">2.93</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.5.3.3\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">6.50</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.5.3.4\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">5.44</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.5.3.5\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">23.65</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.5.3.6\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">24.51</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.5.3.7\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">38.26</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.5.3.8\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">23.75</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.5.3.9\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">18.09</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.5.3.10\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">36.66</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.6.4\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T2.3.6.4.1\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">FC-LSTM\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04440v2#bib.bib39\" title=\"\">39</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.6.4.2\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">2.37</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.6.4.3\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">5.70</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.6.4.4\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">4.96</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.6.4.5\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">21.33</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.6.4.6\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">22.33</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.6.4.7\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">35.11</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.6.4.8\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">26.24</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.6.4.9\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">19.30</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.6.4.10\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">40.49</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.7.5\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T2.3.7.5.1\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">DCRNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04440v2#bib.bib43\" 
title=\"\">43</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.7.5.2\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">2.07</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.7.5.3\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">4.90</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.7.5.4\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">4.74</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.7.5.5\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">18.18</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.7.5.6\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">18.91</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.7.5.7\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">30.31</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.7.5.8\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">24.70</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.7.5.9\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">17.12</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.7.5.10\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">38.12</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.8.6\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T2.3.8.6.1\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">STGCN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04440v2#bib.bib64\" title=\"\">64</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.8.6.2\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">2.49</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.8.6.3\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">5.79</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.8.6.4\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">5.69</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.8.6.5\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">17.49</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.8.6.6\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">17.15</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.8.6.7\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">30.12</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.8.6.8\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">22.70</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.8.6.9\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">14.59</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.8.6.10\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">35.55</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.9.7\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T2.3.9.7.1\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">STSGCN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04440v2#bib.bib44\" title=\"\">44</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r 
ltx_align_center\" id=\"S5.T2.3.9.7.2\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">2.11</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.9.7.3\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">4.96</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.9.7.4\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">4.85</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.9.7.5\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">17.48</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.9.7.6\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">16.78</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.9.7.7\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">29.21</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.9.7.8\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">21.19</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.9.7.9\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">13.90</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.9.7.10\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">33.65</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.10.8\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T2.3.10.8.1\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">GWN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04440v2#bib.bib45\" title=\"\">45</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.10.8.2\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">1.95</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.10.8.3\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">4.63</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.10.8.4\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">4.52</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.10.8.5\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">19.85</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.10.8.6\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">19.31</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.10.8.7\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">32.94</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.10.8.8\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">25.45</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.10.8.9\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">17.29</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.10.8.10\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">39.70</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.11.9\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T2.3.11.9.1\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">AGCRN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04440v2#bib.bib46\" title=\"\">46</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.11.9.2\" 
style=\"padding-left:0.0pt;padding-right:0.0pt;\">1.96</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.11.9.3\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">4.64</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.11.9.4\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">4.54</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.11.9.5\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">15.98</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.11.9.6\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">15.23</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.11.9.7\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">28.25</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.11.9.8\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">19.83</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.11.9.9\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">12.97</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.11.9.10\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">32.26</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.12.10\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T2.3.12.10.1\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">GMAN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04440v2#bib.bib25\" title=\"\">25</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.12.10.2\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">1.86</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.12.10.3\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">4.31</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.12.10.4\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">4.32</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.12.10.5\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">16.87</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.12.10.6\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">18.23</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.12.10.7\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">27.92</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.12.10.8\" style=\"padding-left:0.0pt;padding-right:0.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.12.10.8.1\">19.14</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.12.10.9\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">13.19</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.12.10.10\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">31.60</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.13.11\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T2.3.13.11.1\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">DSTAGNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.04440v2#bib.bib23\" title=\"\">23</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" 
id=\"S5.T2.3.13.11.2\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">1.72*</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.13.11.3\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">3.92*</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.13.11.4\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">3.98*</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.13.11.5\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">15.57</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.13.11.6\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">14.68</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.13.11.7\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">27.21</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.13.11.8\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">19.30</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.13.11.9\" style=\"padding-left:0.0pt;padding-right:0.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.13.11.9.1\">12.70</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.13.11.10\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">31.46</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.14.12\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S5.T2.3.14.12.1\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.14.12.1.1\">W-DSTAGNN</span> (Proposed)</th>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.14.12.2\" style=\"padding-left:0.0pt;padding-right:0.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.14.12.2.1\">1.70</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.14.12.3\" style=\"padding-left:0.0pt;padding-right:0.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.14.12.3.1\">3.86</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.14.12.4\" style=\"padding-left:0.0pt;padding-right:0.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.14.12.4.1\">3.88</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.14.12.5\" style=\"padding-left:0.0pt;padding-right:0.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.14.12.5.1\">15.31</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.14.12.6\" style=\"padding-left:0.0pt;padding-right:0.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.14.12.6.1\">14.49</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.14.12.7\" style=\"padding-left:0.0pt;padding-right:0.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.14.12.7.1\">26.59</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.14.12.8\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">19.30</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center\" id=\"S5.T2.3.14.12.9\" style=\"padding-left:0.0pt;padding-right:0.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.14.12.9.1\">12.70</span></td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_r\" id=\"S5.T2.3.14.12.10\" style=\"padding-left:0.0pt;padding-right:0.0pt;\"><span 
class=\"ltx_text ltx_font_bold\" id=\"S5.T2.3.14.12.10.1\">31.28</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.15.13\">\n<th class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r\" id=\"S5.T2.3.15.13.1\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">Performance Improvement</th>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_b\" id=\"S5.T2.3.15.13.2\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">1.16%</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_b\" id=\"S5.T2.3.15.13.3\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">1.53%</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.3.15.13.4\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">2.51%</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_b\" id=\"S5.T2.3.15.13.5\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">1.67 %</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_b\" id=\"S5.T2.3.15.13.6\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">1.29%</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.3.15.13.7\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">2.28%</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_b\" id=\"S5.T2.3.15.13.8\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">0.00%</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_b\" id=\"S5.T2.3.15.13.9\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">0.00%</td>\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.3.15.13.10\" style=\"padding-left:0.0pt;padding-right:0.0pt;\">0.57%</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 128 |
+
"capture": "TABLE II: Experiment Results show that the proposed W-DSTAGNN model outperforms all baseline models. \n(* denotes reimplementation)"
|
| 129 |
+
}
|
| 130 |
+
},
|
| 131 |
+
"image_paths": {
|
| 132 |
+
"1": {
|
| 133 |
+
"figure_path": "2407.04440v2_figure_1.png",
|
| 134 |
+
"caption": "Figure 1: Traffic flow data (maroon) alongside its MODWT smooth (purple) and two details (green and orange) coefficients obtained at level 2 decomposition using a Haar wavelet filter. This data represents the traffic flow monitored by the third sensor of (a) PeMS-BAY, (b) PeMS03, and (c) PeMS04 dataset during the first three days of the training period.",
|
| 135 |
+
"url": "http://arxiv.org/html/2407.04440v2/extracted/5869715/MODWT_DATAset.png"
|
| 136 |
+
},
|
| 137 |
+
"2": {
|
| 138 |
+
"figure_path": "2407.04440v2_figure_2.png",
|
| 139 |
+
"caption": "Figure 2: Detailed framework of the proposed W-DSTAGNN model.",
|
| 140 |
+
"url": "http://arxiv.org/html/2407.04440v2/x1.png"
|
| 141 |
+
},
|
| 142 |
+
"3": {
|
| 143 |
+
"figure_path": "2407.04440v2_figure_3.png",
|
| 144 |
+
"caption": "Figure 3: Sensor distribution of PeMS-BAY dataset.",
|
| 145 |
+
"url": "http://arxiv.org/html/2407.04440v2/extracted/5869715/map.png"
|
| 146 |
+
},
|
| 147 |
+
"4": {
|
| 148 |
+
"figure_path": "2407.04440v2_figure_4.png",
|
| 149 |
+
"caption": "Figure 4: Heatmap of the correlation values for selected 50 sensor locations of PeMS-BAY dataset.",
|
| 150 |
+
"url": "http://arxiv.org/html/2407.04440v2/extracted/5869715/Heatmap_Corr_Plot.png"
|
| 151 |
+
},
|
| 152 |
+
"5": {
|
| 153 |
+
"figure_path": "2407.04440v2_figure_5.png",
|
| 154 |
+
"caption": "Figure 5: Ground truth data for node 17 of PeMS-BAY on 1st testing day (black) and their corresponding forecasts generated by W-DSTAGNN (red) and DSTAGNN (blue).",
|
| 155 |
+
"url": "http://arxiv.org/html/2407.04440v2/x2.png"
|
| 156 |
+
},
|
| 157 |
+
"6": {
|
| 158 |
+
"figure_path": "2407.04440v2_figure_6.png",
|
| 159 |
+
"caption": "Figure 6: Step-wise forecast error (MAE) comparison between DSTAGNN (red) and W-DSTAGNN (blue) for the PeMS-BAY dataset.",
|
| 160 |
+
"url": "http://arxiv.org/html/2407.04440v2/x3.png"
|
| 161 |
+
},
|
| 162 |
+
"7": {
|
| 163 |
+
"figure_path": "2407.04440v2_figure_7.png",
|
| 164 |
+
"caption": "Figure 7: Multiple comparisons with the best analysis for the three benchmark datasets in terms of MAE metric. In the plot, W-DSTAGNN - 1.67 indicates that the average rank of W-DSTAGNN is 1.67, similar to others.",
|
| 165 |
+
"url": "http://arxiv.org/html/2407.04440v2/extracted/5869715/MCB_MAE_Plot.png"
|
| 166 |
+
},
|
| 167 |
+
"8": {
|
| 168 |
+
"figure_path": "2407.04440v2_figure_8.png",
|
| 169 |
+
"caption": "Figure 8: Forecast performance (MAPE) of W-DSTAGNN for PeMS-BAY (orange), PeMS03 (green), and PeMS04 (yellow) datasets with varying the level of MODWT decomposition in the wavelet temporal attention block.",
|
| 170 |
+
"url": "http://arxiv.org/html/2407.04440v2/extracted/5869715/Level_MAPE.png"
|
| 171 |
+
},
|
| 172 |
+
"9": {
|
| 173 |
+
"figure_path": "2407.04440v2_figure_9.png",
|
| 174 |
+
"caption": "Figure 9: Ground truth traffic dataset (red line) with the corresponding point forecasts (blue line), and 90% conformal prediction interval (blue shaded region) generated by the W-DSTAGNN architecture for the first testing day of (a) PeMS-BAY (node 56), (b) PeMS03 (node 1), and (c) PeMS04 (node 1) datasets.",
|
| 175 |
+
"url": "http://arxiv.org/html/2407.04440v2/extracted/5869715/Conformal_Plot_All.png"
|
| 176 |
+
}
|
| 177 |
+
},
|
| 178 |
+
"validation": true,
|
| 179 |
+
"references": [
|
| 180 |
+
{
|
| 181 |
+
"1": {
|
| 182 |
+
"title": "Courier Corporation, 1995.",
|
| 183 |
+
"author": "I. N. Sneddon, Fourier transforms.",
|
| 184 |
+
"venue": null,
|
| 185 |
+
"url": null
|
| 186 |
+
}
|
| 187 |
+
},
|
| 188 |
+
{
|
| 189 |
+
"2": {
|
| 190 |
+
"title": "Cambridge university press, 2000.",
|
| 191 |
+
"author": "D. B. Percival and A. T. Walden, Wavelet methods for time series\nanalysis, vol. 4.",
|
| 192 |
+
"venue": null,
|
| 193 |
+
"url": null
|
| 194 |
+
}
|
| 195 |
+
},
|
| 196 |
+
{
|
| 197 |
+
"3": {
|
| 198 |
+
"title": "OTexts, 2018.",
|
| 199 |
+
"author": "R. Hyndman, Forecasting: principles and practice.",
|
| 200 |
+
"venue": null,
|
| 201 |
+
"url": null
|
| 202 |
+
}
|
| 203 |
+
},
|
| 204 |
+
{
|
| 205 |
+
"4": {
|
| 206 |
+
"title": "John Wiley & Sons, 2015.",
|
| 207 |
+
"author": "G. E. Box, G. M. Jenkins, G. C. Reinsel, and G. M. Ljung, Time series\nanalysis: forecasting and control.",
|
| 208 |
+
"venue": null,
|
| 209 |
+
"url": null
|
| 210 |
+
}
|
| 211 |
+
},
|
| 212 |
+
{
|
| 213 |
+
"5": {
|
| 214 |
+
"title": "Princeton university press, 2020.",
|
| 215 |
+
"author": "J. D. Hamilton, Time series analysis.",
|
| 216 |
+
"venue": null,
|
| 217 |
+
"url": null
|
| 218 |
+
}
|
| 219 |
+
},
|
| 220 |
+
{
|
| 221 |
+
"6": {
|
| 222 |
+
"title": "Springer, 2005.",
|
| 223 |
+
"author": "V. Vovk, A. Gammerman, and G. Shafer, Algorithmic learning in a random\nworld, vol. 29.",
|
| 224 |
+
"venue": null,
|
| 225 |
+
"url": null
|
| 226 |
+
}
|
| 227 |
+
}
|
| 228 |
+
],
|
| 229 |
+
"url": "http://arxiv.org/html/2407.04440v2"
|
| 230 |
+
}
|
20240921/2407.08742v4.json
ADDED
|
@@ -0,0 +1,453 @@
| 1 |
+
{
|
| 2 |
+
"title": "Improved Robustness and Hyperparameter Selection in the Dense Associative Memory",
|
| 3 |
+
"abstract": "The Dense Associative Memory generalizes the Hopfield network by allowing for sharper interaction functions. This increases the capacity of the network as an autoassociative memory as nearby learned attractors will not interfere with one another. However, the implementation of the network relies on applying large exponents to the dot product of memory vectors and probe vectors. If the dimension of the data is large the calculation can be very large and result in imprecisions and overflow when using floating point numbers in a practical implementation. We describe the computational issues in detail, modify the original network description to mitigate the problem, and show the modification will not alter the networks\u2019 dynamics during update or training. We also show our modification greatly improves hyperparameter selection for the Dense Associative Memory, removing dependence on the interaction vertex and resulting in an optimal region of hyperparameters that does not significantly change with the interaction vertex as it does in the original network.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Autoassociative memories are a class of neural networks that learn to remember states, typically also allowing nearby states to iterate towards similar learned states. These networks act as memories for the learned states, reconstructing lost information and correcting errors in probe states. The Hopfield network \\citepHopfield1982, Hopfield1984 is perhaps the most studied model in the class. However, as with all autoassociative memories, the Hopfield network suffers from capacity issues \u2014 the number of states that can be stored in a network without error is limited. In the Hopfield network with Hebbian learning, this has been shown to be roughly for a network of dimension \\citepMcEliece1987, Hertz1991. The Dense Associative Memory, also known as the modern Hopfield network, generalizes the classical Hopfield network by introducing an interaction function parameterized by an interaction vertex \\citepKrotovHopfield2016, KrotovHopfield2018. This function controls the range of the influence for learned states, allowing control of the sizes of the attractors and increasing the network capacity. \\citetKrotovHopfield2016 also introduce several other generalizations which are parameterized by additional hyperparameters relating to learning, including the initial learning rate, learning rate decay, momentum, learning temperature and the exponent on the error term. Additional hyperparameters were introduced such as the form of the interaction function, the number of memory vectors, and more. In effect, the Dense Associative Memory is a potentially more powerful autoassociative memory, but at the cost of increased complexity and reliance on hyperparameter tuning.\nWe focus on the implementation details of the Dense Associative Memory. In particular, we show the exact form given by Krotov and Hopfield suffers from issues relating to computation and numerical stability. The current form calculates the dot product between two vectors of length then immediately applies a potentially large exponentiation based on the interaction function. This can cause inaccuracies in the floating point numbers used for computation, or even completely overflow them. In Section 4 ###reference_### we show a modification to the original form \u2014 a normalization and shifting of scaling factors \u2014 that prevents the computational problems, and prove that the modifications do not change the network behavior for a specific class of interaction functions: homogenous functions. Fortunately, the typical interaction functions \u2014 the polynomial interaction function (Equation 5 ###reference_###) and rectified polynomial interaction function (Equation 6 ###reference_###) \u2014 are in this class. We show our modifications do not alter the properties of the autoassociative memory, such as the capacity, but do appear to have emergent effects on the network over the course of training. In Section 5 ###reference_### we provide experimental results that show our modified network has a stable region of optimal hyperparameters across a wide range of interaction vertices. This is in comparison to the original network which had the optimal hyperparameters shift dramatically as the interaction vertex changed even for the same dataset. We also show that the optimal region of hyperparameters is no longer heavily dependent on the size of the data vectors, meaning applying the Dense Associative Memory to a new task will not require massively retuning the hyperparameter selections."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Literature Review",
|
| 15 |
+
"text": "Our proposed method of shifting the scaling factors within the interaction function does not appear to have been proposed previously, and other implementations of the Dense Associative Memory do not seem to have included it. However, many implementations of the Dense Associative Memory use the feed-forward equivalence set forth by Krotov and Hopfield \\citepKrotovHopfield2016. This equivalence allows the Dense Associative Memory to be expressed with some approximations as a feed-forward densely connected neural network with a single hidden layer. This architecture is much easier to implement using traditional deep learning libraries. The feed-forward equivalent model implicitly implements our proposed changes by selecting values of the scaling factor that negate terms arising from a Taylor expansion. This may explain why the feed-forward version of the model is more stable than the auto-associative version.\nNormalization is a typical operation in neural networks. In autoassociative memories specifically, we may apply a normalization term to provide a constant power throughout network calculations, which ensures calculations are proportional only to the magnitudes of the learned weights rather than the magnitude of the probe vector. Even more specifically, in the Hopfield network this is typically achieved by using binary valued vectors. It has been shown networks using these vectors have the same behavior as networks using graded (continuous value) neurons \\citepHopfield1984. Normalization may also be applied in the learning rule, such as in the Hebbian learning rule \\citepHebb1949. Normalization in learning may be used to simply scale the weights into something more interpretable, as in the Hebbian, or to achieve a different behavior during training. For example, batch normalization aims to improve training by normalizing the inputs to a layer across a batch \u2014 allowing the network to focus only on the variations in training data rather than the potentially overwhelming average signal \\citepIoffe2015. Layer normalization is a technique used in training recurrent neural networks and removes the dependence on batch size \\citepBa2016. These normalizations techniques are more complex than what we suggest. Our modifications are not aiming to supersede these techniques in the Dense Associative Memory but simply improve network stability and practicality on an implementation level. Moreover, our suggestions do not exclude the possibility of using these other normalization techniques.\nNetworks related to the Dense Associative Memory have employed some normalization techniques in a similar manner to our work. Perhaps most closely related is the continuous, attention-like Hopfield network \\citepRamsauer2021 which has shown promising results in the realm of transformer architectures. Ramsauer et al. normalize the similarity scores as we do but work over a slightly different domain: spherical vectors rather than bipolar vectors. While the vector magnitude is still constant, the network has changed rather significantly from the one introduced by Krotov and Hopfield which may slightly change the arguments we make below, although likely not considerably. However, we note that no analysis of the network stability in relation to floating point accuracy is made, and the remainder of our modifications are not applied (e.g. shifting scaling factors inside the interaction function), which our work expands on considerably. 
Further works have performed a similar normalization, showing there is a trend of applying this technique in network implementation \u2014 albeit without noting why it is useful for network stability \\citepMillidge2022, Liang2022, AlonsoKrichmar2024. Literature on Dense Associative Memory applications and derivatives discuss normalization either in a separate context or only tangentially. Extensive work has been done on contrastive normalization (a biologically plausible explanation of network behavior) in the Dense Associative Memory and its relation to the restricted Boltzmann machine \\citepKrotovHopfield2021. Other works employ some of the more advanced normalization techniques, including some we discuss above such as layer normalization by treating the Dense Associative Memory as a deep recurrent network \\citepSeidl2021. Again, these works do not consider shifting the scaling factors within the interaction function."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Formalization of the Hopfield Network and Dense Associative Memory",
|
| 21 |
+
"text": "The Hopfield network defines a weight matrix based on the Hebbian of the learned states , indexed by :\nThe update dynamics for a probe state are defined by the sign of the energy function, with updates being applied asynchronously across neurons:\nwhere sign is the sign function, or hardlimiting activation function:\nThe Dense Associative Memory has significantly different learning rules and update dynamics compared to the Hopfield network, as well as major architectural changes, such as using a set of memory vectors instead of a weight matrix . The Dense Associative Memory also does away with a simple energy function and instead uses the sign of the difference of energies:\nWhere is the interaction function, parameterized by interaction vertex . The interaction vertex controls how steep the interaction function is. Typical interaction functions are the polynomial\nrectified polynomial\nor leaky rectified polynomial\nThe Hopfield network behavior is recovered when using the polynomial interaction function in Equation 5 ###reference_### and \\citepKrotovHopfield2016, Demircigil2017. Increasing the interaction vertex allows memory vectors to affect only very similar probe vectors, decreasing interference with other memory vectors.\nThe Hopfield network requires only the energy calculation of the current state for updates (Equation 2 ###reference_###), while the Dense Associative Memory requires the calculation of the energy for the current state when neuron is clamped on (value ) and clamped off (value ). This is more computationally expensive but allows for updating when the interaction vertex is larger than and the usual arguments for update convergence in the Hopfield network fail \\citepHopfield1982, Hopfield1985.\nInstead of a weight matrix, the Dense Associative Memory uses a set of memory vectors, clamped to have values between and , but not necessarily corresponding to the learned states. Instead, the learned states are used to update the memory vectors in a gradient descent. The loss function used in the gradient descent is based on the update rule in Equation 4 ###reference_###:\nWhere indexes over the learned states, and indexes over the neurons. The predicted value of neuron in state , , is bounded between and by . Taking gives an error term we can differentiate to obtain a gradient to optimize with. The new parameters and control the learning process. The error exponent emphasizes of larger errors, which can help training networks with larger interaction vertices \\citepKrotovHopfield2016, KrotovHopfield2018. The inverse temperature scales the argument inside the function, allowing us to avoid the vanishing gradients of as the argument grows largely positive or largely negative. Krotov and Hopfield suggest .\nUpdating by Equation 4 ###reference_### and learning by Equation 8 ###reference_### has proven successful when all hyperparameters are tuned carefully. However, we note some issues when implementing the network according to these rules in practice, particularly relating to floating point precision. Inspecting the order of calculations in Equation 4 ###reference_### and 8 ###reference_###: first the \u201csimilarity score\u201d between a learned state and a memory vector is calculated , effectively the dot product between two binary vectors of length equal to the network dimension . 
Next, this similarity score is passed into the interaction function, which will typically have a polynomial-like region such as in Equation 5 ###reference_###, 6 ###reference_###, or 7 ###reference_###. If the interaction vertex is large the memory vectors become prototypes of the learned states \\citepKrotovHopfield2016, hence the similarity scores will approach the bound for the dot product of two binary vectors, . We may have to calculate a truly massive number as an intermediate value. For example, and will result in an intermediate value of . Single precision floating point numbers (\u201cfloats\u201d) have a maximum value of around , while double precision floating point numbers (\u201cdoubles\u201d) have a maximum value of around . In our example, we are already incapable of even storing the intermediate calculation in a float, and it would not require increasing the network dimension or interaction vertex considerably to break a double either. Furthermore, the precision of these data types decreases as we approach the limits, potentially leading to numerical instabilities during training or updating. Even in the update rule (Equation 4 ###reference_###) where only the sign of the result is relevant, a floating point overflow renders the calculation unusable.\nWe propose a slight modification to the implementation of the Dense Associative Memory. Normalizing the similarity score by the network dimension bounds the magnitude of the result to rather than . Additionally, we propose pulling the scaling factor inside the interaction function, so we can appropriately scale the value before any imprecision is introduced by a large exponentiation as well as controlling the gradient, making the network more robust. We show these modifications are equivalent to the original Dense Associative Memory specification in Section 4 ###reference_###. In Section 5 ###reference_### we also show by experimentation that these modifications make the network temperature independent of the interaction vertex. This makes working with the Dense Associative Memory more practical, as it avoids large hyperparameter searches when slightly altering the interaction vertex."
|
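The overflow scenario described in this section is easy to reproduce. The sketch below (Python/NumPy) uses illustrative values d = 1000 and n = 30, which are not taken from the paper: the raw similarity score raised to the interaction vertex overflows a float32, while the normalized score stays bounded in [-1, 1].

```python
import numpy as np

# Illustrative values only (not from the paper): 1000**30 = 1e90 exceeds the
# float32 maximum (~3.4e38) but still fits in a float64 (~1.8e308).
d, n = 1000, 30
rng = np.random.default_rng(0)
memory = rng.choice([-1.0, 1.0], size=d)
probe = memory.copy()                         # perfect match: dot product == d

raw = np.float32(memory @ probe)              # similarity score == 1000.0
print(np.power(raw, np.float32(n)))           # inf: float32 overflow

# Proposed modification: normalize the similarity score by d before the
# interaction function, so the intermediate value stays in [-1, 1].
normalized = np.float32((memory @ probe) / d)
print(np.power(normalized, np.float32(n)))    # 1.0, no overflow
```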
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "Modification and Consistency with Original",
|
| 27 |
+
"text": "Our modifications attempt to rectify the floating point issues by scaling the similarity scores before applying the exponentiation of the interaction function. To justify our modifications we must show that the scaling has no effect on the properties of the Dense Associative Memory in both learning and updating. For the update rule, we will show the sign of the argument to the hardlimiting function in Equation 4 ###reference_### is not affected as we introduce a scaling factor and move it within the interaction function. For learning, we will make a similar argument using Equation 8 ###reference_###."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "4.1",
|
| 31 |
+
"parent_section_id": "4",
|
| 32 |
+
"section_name": "Homogeneity of the Interaction Function",
|
| 33 |
+
"text": "In parts of our proof on the modification of the Dense Associative Memory we require the interaction function to have a particular form. We require the sign of the difference of two functions remain constant even when a scaling factor is applied inside those functions; . A stronger property (that is much easier to prove) is that of homogeneity:\nwith the exponent known as the degree of homogeneity. Interaction functions that are not homogenous may still satisfy our modifications, but we find the proof easier with this stronger property.\nThe polynomial interaction function (Equation 5 ###reference_###) is homogenous.\nHence, the polynomial interaction function is homogenous, with degree of homogeneity equal to the interaction vertex .\n\u220e\nThe rectified polynomial interaction function (Equation 6 ###reference_###) is homogenous.\nNote that the sign of is unchanged by scaling by , so we can change the conditions on the limits as we did in Equation 11 ###reference_###. Hence, the rectified polynomial interaction function is homogenous, with degree of homogeneity equal to the interaction vertex .\n\u220e"
|
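The homogeneity property proved here is easy to verify numerically. A minimal sketch, with the rectified polynomial written out explicitly and an arbitrary positive scaling c standing in for the 1/d normalization:

```python
import numpy as np

def rectified_poly(x, n):
    """Rectified polynomial interaction function: x**n for x > 0, else 0."""
    return np.where(x > 0, np.power(np.clip(x, 0.0, None), n), 0.0)

# Check F(c*x) == c**n * F(x) for a positive scaling c and sample arguments.
x = np.linspace(-2.0, 2.0, 9)
c, n = 0.01, 5
assert np.allclose(rectified_poly(c * x, n), c**n * rectified_poly(x, n))
```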
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "4.1.1",
|
| 37 |
+
"parent_section_id": "4.1",
|
| 38 |
+
"section_name": "4.1.1 On Common Nonhomogenous Interaction Functions",
|
| 39 |
+
"text": "The leaky rectified polynomial interaction function (Equation 7 ###reference_###) is common in literature, alongside Equation 5 ###reference_### and 6 ###reference_###. However, the leaky rectified polynomial is not homogenous. Empirically, we find it still behaves well under our modifications.\nThe Dense Associative Memory has been generalized further using an exponential interaction function \\citepDemircigil2017. Another modification of the exponential interaction function has been used to allow for continuous states and an exponential capacity \\citepRamsauer2021. This interaction function has been analyzed in depth and linked to the attention mechanism in transformer architectures \\citepVaswani2017. For completion, we discuss our proposed modifications to the new, wildly popular interaction function:\nClearly, the exponential interaction function is not homogenous, as\nSince the constant does not have the form when pulled out of the function, the exponential function is not homogenous. However, we can analyze the exponential function specifically and relax the homogeneity constraint to show our modifications will not affect networks with exponential interaction functions. In particular, we need only show the sign of the difference between two exponentials is unaffected:\nThe behavior of the Dense Associative Memory using the exponential interaction function would be unchanged by normalizing the similarity scores before taking the exponential. This may help stabilize the continuous Dense Associative Memory and improve integrations in deep learning architectures."
|
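A small numerical check of the relaxed argument for the exponential interaction function: the sign of the difference of exponentials is preserved when both similarity scores are divided by the same positive constant d (the values a, b, d below are arbitrary illustrations):

```python
import numpy as np

# exp is strictly increasing, so rescaling both arguments by the same positive
# factor (here the 1/d normalization) cannot flip the sign of the difference.
a, b, d = 70.0, -12.0, 100.0
assert np.sign(np.exp(a / d) - np.exp(b / d)) == np.sign(np.exp(a) - np.exp(b))
```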
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4.2",
|
| 43 |
+
"parent_section_id": "4",
|
| 44 |
+
"section_name": "Update in the Dense Associative Memory",
|
| 45 |
+
"text": "We start with the right-hand side of Equation 4 ###reference_###, introducing an arbitrary constant . We will then show this has no effect on the sign of the result, and we are free to choose to normalize the similarity scores by the network dimension.\nThe Dense Associative Memory, equipped with a homogenous interaction function, has unchanged update dynamics (Equation 4 ###reference_###) when applying a scaling factor to similarity calculations inside the interaction function. That is:\nThe sign of any real number is unaffected by scaling factor :\nMoving within the interaction function requires constraints on the interaction function. We require that the sign of the difference remains the same. A homogenous interaction function gives us this condition, although it is slightly stronger than is required. Using the assertion that is homogenous:\nSince the scaled factor is still arbitrary, we are free to select any (positive) value we like without changing the result.\n\u220e\nTherefore, our modified network\u2019s update rule will give the same behavior as the original update rule in Equation 4 ###reference_###. Our modified update rule is given by:\nAll that is left is to choose a value for the scaling factor . As discussed, we suggest choosing , the inverse of the network dimension, such that the similarity scores are normalized between and , which nicely avoids floating point overflow. It may appear we are trading one floating point inaccuracy for another, as now our worst case would have small similarity scores (intermediate values close to ) mapped even closer to by the polynomial interaction functions, where again floating point numbers are inaccurate. However, the failure case here is to set the value to exactly rather than \u201cinfinity\u201d or \u201cNaN\u201d, and hence computation may continue albeit with reduced accuracy. Furthermore, once training has progressed slightly the memory vectors will likely be quite similar to the data, avoiding this problem. Finally, we could further tune the scaling factor if numerical instability is still a concern, as we have shown a general scaling factor is admissible. In practice, we found this was not required."
|
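A minimal sketch of the modified update rule with the suggested choice of scaling factor 1/d; the polynomial interaction function and the tie-breaking convention at zero are illustrative choices, not prescriptions from the paper:

```python
import numpy as np

def update_neuron(state, memories, i, n):
    """One asynchronous update of neuron i under the modified rule (sketch):
    similarity scores are divided by the network dimension d before the
    polynomial interaction function is applied. `state` is a bipolar vector
    of length d, `memories` is a (K, d) array of memory vectors."""
    d = state.shape[0]
    F = lambda s: np.power(s, n)              # polynomial interaction function
    s_on, s_off = state.copy(), state.copy()
    s_on[i], s_off[i] = 1.0, -1.0
    diff = np.sum(F(memories @ s_on / d) - F(memories @ s_off / d))
    return 1.0 if diff >= 0 else -1.0         # tie at zero broken towards +1
```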
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.3",
|
| 49 |
+
"parent_section_id": "4",
|
| 50 |
+
"section_name": "Learning in the Dense Associative Memory",
|
| 51 |
+
"text": "Reasoning about the learning rule (Equation 8 ###reference_###) is slightly trickier than the update rule. We must ensure the network learning remains consistent rather than just the sign of the energy difference. Furthermore, there is already a scaling factor present. We will show that we can pull the existing scaling factor within the interaction function and keep its intended action of shifting the argument of the function, and hence that we can achieve the same calculation as the original network with our modifications. The argument here is largely the same as in Section 4.2 ###reference_###.\nThe Dense Associative Memory, equipped with a homogenous interaction function, has unchanged learning behavior (Equation 8 ###reference_###) when moving the scaling factor inside the interaction function evaluations, up to adjusting the scaling factor. That is:\nEquation 8 ###reference_### defines a loss function over which a gradient descent is applied to update the memory vectors . To show this gradient descent is unchanged by moving the scaling factor inside the interaction function evaluations, we focus on the predicted neuron value in Equation 8 ###reference_### and apply the same algebra as in Theorem 4.2.1 ###reference_.ThmModernHopfieldModificationTheorem1### to take the scaling factor inside the interaction function. Note that this also requires the homogeneity of the interaction function, and may alter the value the scaling factor , but will ensure the argument to the (and hence the gradient) remains the same. The exact gradient expression is eschewed here but remains unchanged from the original.\n\u220e\nTherefore, our modified learning rule has the form:\nKrotov and Hopfield suggest a value of . We suggest a modified value of . Since interaction functions of interest have a degree of homogeneity equal to the interaction vertex , shifting the scaling factor inside is effectively equivalent to taking the exponent of , so we can remove the exponent from Krotov and Hopfield\u2019s suggestion. Furthermore, as in Section 4.2 ###reference_### we suggest normalizing the similarity score by the network dimension to be bounded between and . It may seem alarming that we suggest massively lowering the similarity score in this equation, as it may affect the argument passed to the function and hence the magnitude of the gradients used in learning, but we can always simply rescale using the temperature to increase this value again if required. However, the default behavior of the network now results in rapidly shrinking intermediate values during training, rather than exploding values that are often unrecoverable. By tuning we can shift the argument to just as we could in the unmodified network while still avoiding floating point overflow."
|
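A sketch of the corresponding modified loss, with the scaling factor moved inside the interaction-function evaluations and the similarity scores normalized by d. The 2m exponent on the error term and the exact placement of beta follow the original Krotov-Hopfield formulation and should be read as assumptions where the text elides them:

```python
import numpy as np

def modified_loss(memories, data, n, beta, m=1):
    """Loss of the modified learning rule (sketch), to be minimized over the
    memory vectors by gradient descent. `data` is a (P, d) array of bipolar
    learned states; beta is applied inside the interaction function."""
    P, d = data.shape
    F = lambda s: np.power(s, n)
    total = 0.0
    for sigma in data:                        # each learned state
        for i in range(d):                    # each neuron
            s_on, s_off = sigma.copy(), sigma.copy()
            s_on[i], s_off[i] = 1.0, -1.0
            arg = np.sum(F(beta * (memories @ s_on) / d)
                         - F(beta * (memories @ s_off) / d))
            total += np.abs(sigma[i] - np.tanh(arg)) ** (2 * m)
    return total
```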
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "5",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Hyperparameter Tuning",
|
| 57 |
+
"text": "The original Dense Associative Memory suffered from very strict hyperparameter requirements. Furthermore, changing the value of the interaction vertex would significantly change the hyperparameters that would train the model well. We find that our modifications \u2014 particularly, normalizing the similarity scores in the learning rule (Equation 14 ###reference_###) \u2014 removed the dependence on the interaction vertex, meaning we can reuse the same hyperparameters for a task even as we change the interaction vertex.\nWe focus on the most important hyperparameters for learning: the initial learning rate and temperature. Other hyperparameters were tuned but did not display behavior as dramatic as we present here. We use a learning rate decay of per epoch, a momentum of , and an error exponent of . We found similar results using a decay rate of and higher values for momentum. We also found we did not require changing the error exponent , which \\citetKrotovHopfield2016 found to be useful in learning higher interaction vertices. This perhaps indicates we can remove this hyperparameter and simplify the network.\nThe network is trained on randomly generated bipolar vectors of dimension . Even for the lowest interaction vertex this task is perfectly learnable. Larger dimensions and other dataset sizes were tested with similar results. After training, we probe the network with the learned states; if the probes move only a small distance from the learned states, the network operates as an acceptable associative memory. We measure the average distance from the final, stable states to the learned states, for which a lower value is better. We repeat the experiment five times for each combination of hyperparameters. Select interaction vertices are shown below, and the full results can be found in Appendix A ###reference_###. In particular, Appendix A.3 ###reference_### shows our results for interaction vertices up to ; far above any interaction vertices documented in other literature."
|
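A sketch of the evaluation protocol used in this section, assuming the sizes reported for the appendix runs (d = 100, 20 learned states); `relax` stands for the trained network's relaxation routine and is left abstract here:

```python
import numpy as np

# Random bipolar learned states; after training, probe with those same states
# and report the mean Euclidean distance to the resulting stable states
# (lower is better).
rng = np.random.default_rng(42)
d, num_states = 100, 20
data = rng.choice([-1.0, 1.0], size=(num_states, d))

def mean_recall_distance(relax, data):
    """`relax` maps a probe state to its stable state after repeated updates."""
    stable = np.stack([relax(x.copy()) for x in data])
    return float(np.mean(np.linalg.norm(stable - data, axis=1)))
```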
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "5.1",
|
| 61 |
+
"parent_section_id": "5",
|
| 62 |
+
"section_name": "Original Network Hyperparameter Results",
|
| 63 |
+
"text": "###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### Figure 1 ###reference_### shows the hyperparameters across networks with various interaction vertices. The color of the heat map shows how far the relaxed states are from the original learned states, with lower / darker values being better. The optimal region \u2014 that is, the combination of hyperparameters that give low distances \u2014 is somewhere around and learning rate but shifts considerably with the interaction vertex. We also find a significant increase in distance with the learning rate; higher learning rates tend to degrade network performance. At even modest interaction vertices we find the optimal region is fleeting enough to not appear in our grid search. It is tempting to claim that a finer grid search may reveal the region to persist. However, closer inspection of Figure 1(d) ###reference_sf4### shows that not only has the optimal region vanished at this granularity, but the same region has increased the distance measure compared to its surroundings. Even if the optimal region exists and is very small, it is apparently surrounded by an increasingly suboptimal region. This is troublesome and makes working with the network difficult.\nNote that we have avoided floating point overflow by engineering our experiments to remain within the bounds of a double. In general, this network would fail to train for larger interaction vertices or data dimensions. However, the performance degradation seen at larger interaction vertices in Figure 1 ###reference_### is not due to floating point overflow."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "5.2",
|
| 67 |
+
"parent_section_id": "5",
|
| 68 |
+
"section_name": "Modified Network Hyperparameter Results",
|
| 69 |
+
"text": "###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### Figure 3 ###reference_### shows the same hyperparameter search for our modified network. Note that we have shifted the scale of the inverse temperature as discussed in Section 4 ###reference_###. As in Figure 1 ###reference_### we find the optimal region shifts slightly for small interaction vertices () but unlike the original network we find the region stabilizes and remains substantial for large interaction vertices. Figure 4 ###reference_### shows a finer grid search over the region of interest, showing the optimal region stabilizes around . We find it is common across many network dimensions and task sizes for the inverse temperature to stabilize near , making it much easier to tune the hyperparameters of the Dense Associative Memory. The optimal region also extends to much larger initial learning rates than it does in the original network. Most notably, we find that the optimal region\u2019s position remains stable and size remains large across many values of the interaction vertex for the same network dimension."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5.3",
|
| 73 |
+
"parent_section_id": "5",
|
| 74 |
+
"section_name": "MNIST Classification",
|
| 75 |
+
"text": "So far, we have focused on the Dense Associative Memory as an autoassociative memory, where all neurons are updated at each step and may be updated numerous times until the state reaches stability. Another, perhaps more popular use case of the network in current literature is as a classifier. By splitting the memory vectors into two parts \u2014 a section for input data and a section for classes as logits \u2014 the network can be run as a classifier by only updating the classification neurons, and only updating those neurons once \\citepKrotovHopfield2016. Krotov and Hopfield also found it was necessary to leave the weights of the classification neurons unclamped, unlike the input data section which remained clamped between and . This is another step away from traditional autoassociative memories, but the resulting network is still worth investigating with our modifications due to its popularity. \\citetKrotovHopfield2016 show an equivalence between the Dense Associative Memory operating in this mode and a single-hidden-layer feed-forward neural network by taking the Taylor expansion of the similarity score calculation and ignoring some crosstalk terms. In doing this, the value of was also set to cancel some constants from the Taylor expansion. It is difficult to say how much of the literature is using the autoassociative memory model compared to the feed-forward equivalent, however we suspect that the feed-forward equivalent effectively implements some of our modifications (namely, normalizing the value by the network dimension) which may explain the popularity of this mode, as the network is more stable.\nIn our results below, we have trained the Dense Associative Memory on the MNIST dataset and note the validation F1 score across hyperparameter space. We have used the autoassociative memory model, rather than the feed-forward equivalent. This means we have not explicitly ignored the effects of the classification neurons on one another, as is done in constructing the feed-forward equivalent, although the effect is likely negligible. Note that we have significantly different scales for the original and modified network\u2019s values of , which is not seen in the previous results. We believe this is due to leaving some memory weights unclamped, as well as only updating a small number of neurons as required for classification, but again the optimal value of the inverse temperature stabilizes around for our modified network. Notably, our range of for the original network matches the range found by \\citepKrotovHopfield2016. In all experiments we trained the network for 500 epochs with 256 memory vectors.\n###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### Figures 5 ###reference_### and 6 ###reference_### show the optimal hyperparameter region for the original and modified network on classifying the MNIST dataset. Note that in these figures, we want a higher F1 score and hence the yellow region is better, unlike previous figures where we wanted a lower Euclidean distance and hence the purple region was better. In both Figure 5 ###reference_### and 6 ###reference_### the initially large region for shrinks slightly as grows, and appears to shrink by proportionally the same amount in both the original and modified network. The shape of the region also remains consistent in both the modified and unmodified networks. 
This indicates that our modifications have preserved the stability in classification based tasks. Our modifications have, however, shifted the optimal region to , meaning we have a better idea of where to search for optimal hyperparameters. While not as significant a result as in Section 5.2 ###reference_### and 5.2 ###reference_###, this result is still useful in working with the Dense Associative Memory as the location of the optimal hyperparameter region is consistent across different datasets and tasks."
|
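A sketch of the classification readout described above: the class neurons are appended to the pixel section, updated once, and the most strongly driven class is returned. The bipolar pixel encoding and the all-off initialization of the class neurons are assumptions of this sketch:

```python
import numpy as np

def classify(memories, image, n, num_classes=10):
    """Classification mode (sketch): the last `num_classes` entries of each
    memory vector are class neurons; only those neurons are updated, and only
    once. `image` is the bipolar pixel section of the probe; similarity scores
    are normalized by the full dimension as in the modified network."""
    d = memories.shape[1]                     # pixel dimensions + num_classes
    F = lambda s: np.power(s, n)
    base = -np.ones(num_classes)              # class neurons start "off"
    scores = np.empty(num_classes)
    for c in range(num_classes):
        on = base.copy()
        on[c] = 1.0
        s_on = np.concatenate([image, on])
        s_off = np.concatenate([image, base])
        scores[c] = np.sum(F(memories @ s_on / d) - F(memories @ s_off / d))
    return int(np.argmax(scores))             # tanh is monotone, so the raw
                                              # energy differences suffice
```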
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "6",
|
| 79 |
+
"parent_section_id": null,
|
| 80 |
+
"section_name": "Conclusion",
|
| 81 |
+
"text": "In this work, we have investigated the technical details of the Dense Associative Memory and its implementation. We note that the original network specification leads to floating point imprecision and overflow when calculating intermediate values for both update and learning. We provide details on when this imprecision occurs and show the conditions are more likely when the interaction vertex is large based on the feature-to-prototype transition of the memory vectors \\citepKrotovHopfield2016. We propose a modification to the network implementation that prevents the floating point issues. We prove our modifications do not alter the network properties, such as the capacity and autoassociative nature. Our proof relies on the interaction function being homogenous, however this property is stronger than is required, and we find empirically that some nonhomogenous functions also give well-behaved Dense Associative Memories. We then show our modified network has optimal hyperparameter regions that do not shift based on the choice of interaction vertex for purely autoassociative tasks. For classification like tasks, such as MNIST classification, our modifications do not appear to radically improve the optimal hyperparameter region but rather shift the region to a common location that makes tuning the network easier. Our modifications greatly simplify working with the Dense Associative Memory, as experiments on a dataset do not need to search across a potentially large hyperparameter space for each change in the interaction vertex. We also find several hyperparameters do not need tuning in our experiments, hinting at a potentially simpler network that is easier to tune and interpret."
|
| 82 |
+
}
|
| 83 |
+
],
|
| 84 |
+
"appendix": [
|
| 85 |
+
{
|
| 86 |
+
"section_id": "Appendix 1",
|
| 87 |
+
"parent_section_id": null,
|
| 88 |
+
"section_name": "Appendix A Full Results of Hyperparameter Searches",
|
| 89 |
+
"text": "These results are from the original network, have dimension 100, and train on 20 learned states.\n###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30### ###figure_31### ###figure_32### ###figure_33### ###figure_34### ###figure_35### ###figure_36### These results are from our modified network, have dimension 100, and train on 20 learned states.\n###figure_37### ###figure_38### ###figure_39### ###figure_40### ###figure_41### ###figure_42### ###figure_43### ###figure_44### ###figure_45### ###figure_46### ###figure_47### ###figure_48### ###figure_49### ###figure_50### ###figure_51### ###figure_52### These results continue with the same network and setup from Appendix A.2 ###reference_### but with much larger interaction vertices than were possible with the original network. We also present only the tight grid search results, as the coarse grid search did not capture the optimal region well.\n###figure_53### ###figure_54### ###figure_55### ###figure_56### ###figure_57### ###figure_58### ###figure_59### These results are from our modified network, have dimension 250, and train on 30 learned states.\n###figure_60### ###figure_61### ###figure_62### ###figure_63### ###figure_64### ###figure_65### ###figure_66### ###figure_67### ###figure_68### ###figure_69### ###figure_70### ###figure_71###"
|
| 90 |
+
}
|
| 91 |
+
],
|
| 92 |
+
"tables": {},
|
| 93 |
+
"image_paths": {
|
| 94 |
+
"1(a)": {
|
| 95 |
+
"figure_path": "2407.08742v4_figure_1(a).png",
|
| 96 |
+
"caption": "(a) n=2\ud835\udc5b2n=2italic_n = 2\nFigure 1: Coarse hyperparameter search space for the original network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 97 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModel/Scatter-interactionVertex002.png"
|
| 98 |
+
},
|
| 99 |
+
"1(b)": {
|
| 100 |
+
"figure_path": "2407.08742v4_figure_1(b).png",
|
| 101 |
+
"caption": "(b) n=5\ud835\udc5b5n=5italic_n = 5\nFigure 1: Coarse hyperparameter search space for the original network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 102 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModel/Scatter-interactionVertex005.png"
|
| 103 |
+
},
|
| 104 |
+
"1(c)": {
|
| 105 |
+
"figure_path": "2407.08742v4_figure_1(c).png",
|
| 106 |
+
"caption": "(c) n=10\ud835\udc5b10n=10italic_n = 10\nFigure 1: Coarse hyperparameter search space for the original network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 107 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModel/Scatter-interactionVertex010.png"
|
| 108 |
+
},
|
| 109 |
+
"1(d)": {
|
| 110 |
+
"figure_path": "2407.08742v4_figure_1(d).png",
|
| 111 |
+
"caption": "(d) n=20\ud835\udc5b20n=20italic_n = 20\nFigure 1: Coarse hyperparameter search space for the original network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 112 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModel/Scatter-interactionVertex020.png"
|
| 113 |
+
},
|
| 114 |
+
"2(a)": {
|
| 115 |
+
"figure_path": "2407.08742v4_figure_2(a).png",
|
| 116 |
+
"caption": "(a) n=2\ud835\udc5b2n=2italic_n = 2\nFigure 2: Fine hyperparameter search space for the original network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 117 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModelTight/Scatter-interactionVertex002.png"
|
| 118 |
+
},
|
| 119 |
+
"2(b)": {
|
| 120 |
+
"figure_path": "2407.08742v4_figure_2(b).png",
|
| 121 |
+
"caption": "(b) n=5\ud835\udc5b5n=5italic_n = 5\nFigure 2: Fine hyperparameter search space for the original network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 122 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModelTight/Scatter-interactionVertex005.png"
|
| 123 |
+
},
|
| 124 |
+
"2(c)": {
|
| 125 |
+
"figure_path": "2407.08742v4_figure_2(c).png",
|
| 126 |
+
"caption": "(c) n=10\ud835\udc5b10n=10italic_n = 10\nFigure 2: Fine hyperparameter search space for the original network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 127 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModelTight/Scatter-interactionVertex010.png"
|
| 128 |
+
},
|
| 129 |
+
"2(d)": {
|
| 130 |
+
"figure_path": "2407.08742v4_figure_2(d).png",
|
| 131 |
+
"caption": "(d) n=20\ud835\udc5b20n=20italic_n = 20\nFigure 2: Fine hyperparameter search space for the original network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 132 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModelTight/Scatter-interactionVertex020.png"
|
| 133 |
+
},
|
| 134 |
+
"3(a)": {
|
| 135 |
+
"figure_path": "2407.08742v4_figure_3(a).png",
|
| 136 |
+
"caption": "(a) n=2\ud835\udc5b2n=2italic_n = 2\nFigure 3: Coarse hyperparameter search space for the modified network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 137 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex002.png"
|
| 138 |
+
},
|
| 139 |
+
"3(b)": {
|
| 140 |
+
"figure_path": "2407.08742v4_figure_3(b).png",
|
| 141 |
+
"caption": "(b) n=5\ud835\udc5b5n=5italic_n = 5\nFigure 3: Coarse hyperparameter search space for the modified network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 142 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex005.png"
|
| 143 |
+
},
|
| 144 |
+
"3(c)": {
|
| 145 |
+
"figure_path": "2407.08742v4_figure_3(c).png",
|
| 146 |
+
"caption": "(c) n=10\ud835\udc5b10n=10italic_n = 10\nFigure 3: Coarse hyperparameter search space for the modified network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 147 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex010.png"
|
| 148 |
+
},
|
| 149 |
+
"3(d)": {
|
| 150 |
+
"figure_path": "2407.08742v4_figure_3(d).png",
|
| 151 |
+
"caption": "(d) n=20\ud835\udc5b20n=20italic_n = 20\nFigure 3: Coarse hyperparameter search space for the modified network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 152 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex020.png"
|
| 153 |
+
},
|
| 154 |
+
"4(a)": {
|
| 155 |
+
"figure_path": "2407.08742v4_figure_4(a).png",
|
| 156 |
+
"caption": "(a) n=2\ud835\udc5b2n=2italic_n = 2\nFigure 4: Fine hyperparameter search space for the modified network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 157 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModelTight/Scatter-interactionVertex002.png"
|
| 158 |
+
},
|
| 159 |
+
"4(b)": {
|
| 160 |
+
"figure_path": "2407.08742v4_figure_4(b).png",
|
| 161 |
+
"caption": "(b) n=5\ud835\udc5b5n=5italic_n = 5\nFigure 4: Fine hyperparameter search space for the modified network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 162 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModelTight/Scatter-interactionVertex005.png"
|
| 163 |
+
},
|
| 164 |
+
"4(c)": {
|
| 165 |
+
"figure_path": "2407.08742v4_figure_4(c).png",
|
| 166 |
+
"caption": "(c) n=10\ud835\udc5b10n=10italic_n = 10\nFigure 4: Fine hyperparameter search space for the modified network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 167 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModelTight/Scatter-interactionVertex010.png"
|
| 168 |
+
},
|
| 169 |
+
"4(d)": {
|
| 170 |
+
"figure_path": "2407.08742v4_figure_4(d).png",
|
| 171 |
+
"caption": "(d) n=20\ud835\udc5b20n=20italic_n = 20\nFigure 4: Fine hyperparameter search space for the modified network, measuring the Euclidean distance between learned states and relaxed states over various interaction vertices. Smaller distances correspond to better recall and hence better a better associative memory.",
|
| 172 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModelTight/Scatter-interactionVertex020.png"
|
| 173 |
+
},
|
| 174 |
+
"5(a)": {
|
| 175 |
+
"figure_path": "2407.08742v4_figure_5(a).png",
|
| 176 |
+
"caption": "(a) n=2\ud835\udc5b2n=2italic_n = 2\nFigure 5: Hyperparameter search space for the original network, measuring the validation F1 score on the MNIST dataset. A larger F1 score corresponds to a better performing network.",
|
| 177 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/MNIST/Originaln2.png"
|
| 178 |
+
},
|
| 179 |
+
"5(b)": {
|
| 180 |
+
"figure_path": "2407.08742v4_figure_5(b).png",
|
| 181 |
+
"caption": "(b) n=5\ud835\udc5b5n=5italic_n = 5\nFigure 5: Hyperparameter search space for the original network, measuring the validation F1 score on the MNIST dataset. A larger F1 score corresponds to a better performing network.",
|
| 182 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/MNIST/Originaln5.png"
|
| 183 |
+
},
|
| 184 |
+
"5(c)": {
|
| 185 |
+
"figure_path": "2407.08742v4_figure_5(c).png",
|
| 186 |
+
"caption": "(c) n=10\ud835\udc5b10n=10italic_n = 10\nFigure 5: Hyperparameter search space for the original network, measuring the validation F1 score on the MNIST dataset. A larger F1 score corresponds to a better performing network.",
|
| 187 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/MNIST/Originaln10.png"
|
| 188 |
+
},
|
| 189 |
+
"5(d)": {
|
| 190 |
+
"figure_path": "2407.08742v4_figure_5(d).png",
|
| 191 |
+
"caption": "(d) n=20\ud835\udc5b20n=20italic_n = 20\nFigure 5: Hyperparameter search space for the original network, measuring the validation F1 score on the MNIST dataset. A larger F1 score corresponds to a better performing network.",
|
| 192 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/MNIST/Originaln20.png"
|
| 193 |
+
},
|
| 194 |
+
"6(a)": {
|
| 195 |
+
"figure_path": "2407.08742v4_figure_6(a).png",
|
| 196 |
+
"caption": "(a) n=2\ud835\udc5b2n=2italic_n = 2\nFigure 6: Hyperparameter search space for the modified network, measuring the validation F1 score on the MNIST dataset. A larger F1 score corresponds to a better performing network.",
|
| 197 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/MNIST/Modifiedn2.png"
|
| 198 |
+
},
|
| 199 |
+
"6(b)": {
|
| 200 |
+
"figure_path": "2407.08742v4_figure_6(b).png",
|
| 201 |
+
"caption": "(b) n=5\ud835\udc5b5n=5italic_n = 5\nFigure 6: Hyperparameter search space for the modified network, measuring the validation F1 score on the MNIST dataset. A larger F1 score corresponds to a better performing network.",
|
| 202 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/MNIST/Modifiedn5.png"
|
| 203 |
+
},
|
| 204 |
+
"6(c)": {
|
| 205 |
+
"figure_path": "2407.08742v4_figure_6(c).png",
|
| 206 |
+
"caption": "(c) n=10\ud835\udc5b10n=10italic_n = 10\nFigure 6: Hyperparameter search space for the modified network, measuring the validation F1 score on the MNIST dataset. A larger F1 score corresponds to a better performing network.",
|
| 207 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/MNIST/Modifiedn10.png"
|
| 208 |
+
},
|
| 209 |
+
"6(d)": {
|
| 210 |
+
"figure_path": "2407.08742v4_figure_6(d).png",
|
| 211 |
+
"caption": "(d) n=20\ud835\udc5b20n=20italic_n = 20\nFigure 6: Hyperparameter search space for the modified network, measuring the validation F1 score on the MNIST dataset. A larger F1 score corresponds to a better performing network.",
|
| 212 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/MNIST/Modifiedn20.png"
|
| 213 |
+
},
|
| 214 |
+
"7(a)": {
|
| 215 |
+
"figure_path": "2407.08742v4_figure_7(a).png",
|
| 216 |
+
"caption": "(a) Coarse search space\nFigure 7: Hyperparameter search space for n=2\ud835\udc5b2n=2italic_n = 2",
|
| 217 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModel/Scatter-interactionVertex002.png"
|
| 218 |
+
},
|
| 219 |
+
"7(b)": {
|
| 220 |
+
"figure_path": "2407.08742v4_figure_7(b).png",
|
| 221 |
+
"caption": "(b) Fine search space\nFigure 7: Hyperparameter search space for n=2\ud835\udc5b2n=2italic_n = 2",
|
| 222 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModelTight/Scatter-interactionVertex002.png"
|
| 223 |
+
},
|
| 224 |
+
"8(a)": {
|
| 225 |
+
"figure_path": "2407.08742v4_figure_8(a).png",
|
| 226 |
+
"caption": "(a) Coarse search space\nFigure 8: Hyperparameter search space for n=3\ud835\udc5b3n=3italic_n = 3",
|
| 227 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModel/Scatter-interactionVertex003.png"
|
| 228 |
+
},
|
| 229 |
+
"8(b)": {
|
| 230 |
+
"figure_path": "2407.08742v4_figure_8(b).png",
|
| 231 |
+
"caption": "(b) Fine search space\nFigure 8: Hyperparameter search space for n=3\ud835\udc5b3n=3italic_n = 3",
|
| 232 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModelTight/Scatter-interactionVertex003.png"
|
| 233 |
+
},
|
| 234 |
+
"9(a)": {
|
| 235 |
+
"figure_path": "2407.08742v4_figure_9(a).png",
|
| 236 |
+
"caption": "(a) Coarse search space\nFigure 9: Hyperparameter search space for n=5\ud835\udc5b5n=5italic_n = 5",
|
| 237 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModel/Scatter-interactionVertex005.png"
|
| 238 |
+
},
|
| 239 |
+
"9(b)": {
|
| 240 |
+
"figure_path": "2407.08742v4_figure_9(b).png",
|
| 241 |
+
"caption": "(b) Fine search space\nFigure 9: Hyperparameter search space for n=5\ud835\udc5b5n=5italic_n = 5",
|
| 242 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModelTight/Scatter-interactionVertex005.png"
|
| 243 |
+
},
|
| 244 |
+
"10(a)": {
|
| 245 |
+
"figure_path": "2407.08742v4_figure_10(a).png",
|
| 246 |
+
"caption": "(a) Coarse search space\nFigure 10: Hyperparameter search space for n=10\ud835\udc5b10n=10italic_n = 10",
|
| 247 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModel/Scatter-interactionVertex010.png"
|
| 248 |
+
},
|
| 249 |
+
"10(b)": {
|
| 250 |
+
"figure_path": "2407.08742v4_figure_10(b).png",
|
| 251 |
+
"caption": "(b) Fine search space\nFigure 10: Hyperparameter search space for n=10\ud835\udc5b10n=10italic_n = 10",
|
| 252 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModelTight/Scatter-interactionVertex010.png"
|
| 253 |
+
},
|
| 254 |
+
"11(a)": {
|
| 255 |
+
"figure_path": "2407.08742v4_figure_11(a).png",
|
| 256 |
+
"caption": "(a) Coarse search space\nFigure 11: Hyperparameter search space for n=20\ud835\udc5b20n=20italic_n = 20",
|
| 257 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModel/Scatter-interactionVertex020.png"
|
| 258 |
+
},
|
| 259 |
+
"11(b)": {
|
| 260 |
+
"figure_path": "2407.08742v4_figure_11(b).png",
|
| 261 |
+
"caption": "(b) Fine search space\nFigure 11: Hyperparameter search space for n=20\ud835\udc5b20n=20italic_n = 20",
|
| 262 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModelTight/Scatter-interactionVertex020.png"
|
| 263 |
+
},
|
| 264 |
+
"12(a)": {
|
| 265 |
+
"figure_path": "2407.08742v4_figure_12(a).png",
|
| 266 |
+
"caption": "(a) Coarse search space\nFigure 12: Hyperparameter search space for n=30\ud835\udc5b30n=30italic_n = 30",
|
| 267 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModel/Scatter-interactionVertex030.png"
|
| 268 |
+
},
|
| 269 |
+
"12(b)": {
|
| 270 |
+
"figure_path": "2407.08742v4_figure_12(b).png",
|
| 271 |
+
"caption": "(b) Fine search space\nFigure 12: Hyperparameter search space for n=30\ud835\udc5b30n=30italic_n = 30",
|
| 272 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/originalModelTight/Scatter-interactionVertex030.png"
|
| 273 |
+
},
|
| 274 |
+
"13(a)": {
|
| 275 |
+
"figure_path": "2407.08742v4_figure_13(a).png",
|
| 276 |
+
"caption": "(a) Coarse search space\nFigure 13: Hyperparameter search space for n=2\ud835\udc5b2n=2italic_n = 2",
|
| 277 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex002.png"
|
| 278 |
+
},
|
| 279 |
+
"13(b)": {
|
| 280 |
+
"figure_path": "2407.08742v4_figure_13(b).png",
|
| 281 |
+
"caption": "(b) Fine search space\nFigure 13: Hyperparameter search space for n=2\ud835\udc5b2n=2italic_n = 2",
|
| 282 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModelTight/Scatter-interactionVertex002.png"
|
| 283 |
+
},
|
| 284 |
+
"14(a)": {
|
| 285 |
+
"figure_path": "2407.08742v4_figure_14(a).png",
|
| 286 |
+
"caption": "(a) Coarse search space\nFigure 14: Hyperparameter search space for n=3\ud835\udc5b3n=3italic_n = 3",
|
| 287 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex003.png"
|
| 288 |
+
},
|
| 289 |
+
"14(b)": {
|
| 290 |
+
"figure_path": "2407.08742v4_figure_14(b).png",
|
| 291 |
+
"caption": "(b) Fine search space\nFigure 14: Hyperparameter search space for n=3\ud835\udc5b3n=3italic_n = 3",
|
| 292 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModelTight/Scatter-interactionVertex003.png"
|
| 293 |
+
},
|
| 294 |
+
"15(a)": {
|
| 295 |
+
"figure_path": "2407.08742v4_figure_15(a).png",
|
| 296 |
+
"caption": "(a) Coarse search space\nFigure 15: Hyperparameter search space for n=5\ud835\udc5b5n=5italic_n = 5",
|
| 297 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex005.png"
|
| 298 |
+
},
|
| 299 |
+
"15(b)": {
|
| 300 |
+
"figure_path": "2407.08742v4_figure_15(b).png",
|
| 301 |
+
"caption": "(b) Fine search space\nFigure 15: Hyperparameter search space for n=5\ud835\udc5b5n=5italic_n = 5",
|
| 302 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModelTight/Scatter-interactionVertex005.png"
|
| 303 |
+
},
|
| 304 |
+
"16(a)": {
|
| 305 |
+
"figure_path": "2407.08742v4_figure_16(a).png",
|
| 306 |
+
"caption": "(a) Coarse search space\nFigure 16: Hyperparameter search space for n=10\ud835\udc5b10n=10italic_n = 10",
|
| 307 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex010.png"
|
| 308 |
+
},
|
| 309 |
+
"16(b)": {
|
| 310 |
+
"figure_path": "2407.08742v4_figure_16(b).png",
|
| 311 |
+
"caption": "(b) Fine search space\nFigure 16: Hyperparameter search space for n=10\ud835\udc5b10n=10italic_n = 10",
|
| 312 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModelTight/Scatter-interactionVertex010.png"
|
| 313 |
+
},
|
| 314 |
+
"17(a)": {
|
| 315 |
+
"figure_path": "2407.08742v4_figure_17(a).png",
|
| 316 |
+
"caption": "(a) Coarse search space\nFigure 17: Hyperparameter search space for n=15\ud835\udc5b15n=15italic_n = 15",
|
| 317 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex015.png"
|
| 318 |
+
},
|
| 319 |
+
"17(b)": {
|
| 320 |
+
"figure_path": "2407.08742v4_figure_17(b).png",
|
| 321 |
+
"caption": "(b) Fine search space\nFigure 17: Hyperparameter search space for n=15\ud835\udc5b15n=15italic_n = 15",
|
| 322 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModelTight/Scatter-interactionVertex015.png"
|
| 323 |
+
},
|
| 324 |
+
"18(a)": {
|
| 325 |
+
"figure_path": "2407.08742v4_figure_18(a).png",
|
| 326 |
+
"caption": "(a) Coarse search space\nFigure 18: Hyperparameter search space for n=20\ud835\udc5b20n=20italic_n = 20",
|
| 327 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex020.png"
|
| 328 |
+
},
|
| 329 |
+
"18(b)": {
|
| 330 |
+
"figure_path": "2407.08742v4_figure_18(b).png",
|
| 331 |
+
"caption": "(b) Fine search space\nFigure 18: Hyperparameter search space for n=20\ud835\udc5b20n=20italic_n = 20",
|
| 332 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModelTight/Scatter-interactionVertex020.png"
|
| 333 |
+
},
|
| 334 |
+
"19(a)": {
|
| 335 |
+
"figure_path": "2407.08742v4_figure_19(a).png",
|
| 336 |
+
"caption": "(a) Coarse search space\nFigure 19: Hyperparameter search space for n=25\ud835\udc5b25n=25italic_n = 25",
|
| 337 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex025.png"
|
| 338 |
+
},
|
| 339 |
+
"19(b)": {
|
| 340 |
+
"figure_path": "2407.08742v4_figure_19(b).png",
|
| 341 |
+
"caption": "(b) Fine search space\nFigure 19: Hyperparameter search space for n=25\ud835\udc5b25n=25italic_n = 25",
|
| 342 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModelTight/Scatter-interactionVertex025.png"
|
| 343 |
+
},
|
| 344 |
+
"20(a)": {
|
| 345 |
+
"figure_path": "2407.08742v4_figure_20(a).png",
|
| 346 |
+
"caption": "(a) Coarse search space\nFigure 20: Hyperparameter search space for n=30\ud835\udc5b30n=30italic_n = 30",
|
| 347 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex030.png"
|
| 348 |
+
},
|
| 349 |
+
"20(b)": {
|
| 350 |
+
"figure_path": "2407.08742v4_figure_20(b).png",
|
| 351 |
+
"caption": "(b) Fine search space\nFigure 20: Hyperparameter search space for n=30\ud835\udc5b30n=30italic_n = 30",
|
| 352 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModelTight/Scatter-interactionVertex030.png"
|
| 353 |
+
},
|
| 354 |
+
"21": {
|
| 355 |
+
"figure_path": "2407.08742v4_figure_21.png",
|
| 356 |
+
"caption": "Figure 21: Hyperparameter search space for n=40\ud835\udc5b40n=40italic_n = 40",
|
| 357 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex040.png"
|
| 358 |
+
},
|
| 359 |
+
"22": {
|
| 360 |
+
"figure_path": "2407.08742v4_figure_22.png",
|
| 361 |
+
"caption": "Figure 22: Hyperparameter search space for n=50\ud835\udc5b50n=50italic_n = 50",
|
| 362 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex050.png"
|
| 363 |
+
},
|
| 364 |
+
"23": {
|
| 365 |
+
"figure_path": "2407.08742v4_figure_23.png",
|
| 366 |
+
"caption": "Figure 23: Hyperparameter search space for n=60\ud835\udc5b60n=60italic_n = 60",
|
| 367 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex060.png"
|
| 368 |
+
},
|
| 369 |
+
"24": {
|
| 370 |
+
"figure_path": "2407.08742v4_figure_24.png",
|
| 371 |
+
"caption": "Figure 24: Hyperparameter search space for n=70\ud835\udc5b70n=70italic_n = 70",
|
| 372 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex070.png"
|
| 373 |
+
},
|
| 374 |
+
"25": {
|
| 375 |
+
"figure_path": "2407.08742v4_figure_25.png",
|
| 376 |
+
"caption": "Figure 25: Hyperparameter search space for n=80\ud835\udc5b80n=80italic_n = 80",
|
| 377 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex080.png"
|
| 378 |
+
},
|
| 379 |
+
"26": {
|
| 380 |
+
"figure_path": "2407.08742v4_figure_26.png",
|
| 381 |
+
"caption": "Figure 26: Hyperparameter search space for n=90\ud835\udc5b90n=90italic_n = 90",
|
| 382 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex090.png"
|
| 383 |
+
},
|
| 384 |
+
"27": {
|
| 385 |
+
"figure_path": "2407.08742v4_figure_27.png",
|
| 386 |
+
"caption": "Figure 27: Hyperparameter search space for n=100\ud835\udc5b100n=100italic_n = 100",
|
| 387 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel/Scatter-interactionVertex100.png"
|
| 388 |
+
},
|
| 389 |
+
"28(a)": {
|
| 390 |
+
"figure_path": "2407.08742v4_figure_28(a).png",
|
| 391 |
+
"caption": "(a) Coarse search space\nFigure 28: Hyperparameter search space for n=2\ud835\udc5b2n=2italic_n = 2",
|
| 392 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel250/Scatter-interactionVertex002.png"
|
| 393 |
+
},
|
| 394 |
+
"28(b)": {
|
| 395 |
+
"figure_path": "2407.08742v4_figure_28(b).png",
|
| 396 |
+
"caption": "(b) Fine search space\nFigure 28: Hyperparameter search space for n=2\ud835\udc5b2n=2italic_n = 2",
|
| 397 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel250Tight/Scatter-interactionVertex002.png"
|
| 398 |
+
},
|
| 399 |
+
"29(a)": {
|
| 400 |
+
"figure_path": "2407.08742v4_figure_29(a).png",
|
| 401 |
+
"caption": "(a) Coarse search space\nFigure 29: Hyperparameter search space for n=3\ud835\udc5b3n=3italic_n = 3",
|
| 402 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel250/Scatter-interactionVertex003.png"
|
| 403 |
+
},
|
| 404 |
+
"29(b)": {
|
| 405 |
+
"figure_path": "2407.08742v4_figure_29(b).png",
|
| 406 |
+
"caption": "(b) Fine search space\nFigure 29: Hyperparameter search space for n=3\ud835\udc5b3n=3italic_n = 3",
|
| 407 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel250Tight/Scatter-interactionVertex003.png"
|
| 408 |
+
},
|
| 409 |
+
"30(a)": {
|
| 410 |
+
"figure_path": "2407.08742v4_figure_30(a).png",
|
| 411 |
+
"caption": "(a) Coarse search space\nFigure 30: Hyperparameter search space for n=5\ud835\udc5b5n=5italic_n = 5",
|
| 412 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel250/Scatter-interactionVertex005.png"
|
| 413 |
+
},
|
| 414 |
+
"30(b)": {
|
| 415 |
+
"figure_path": "2407.08742v4_figure_30(b).png",
|
| 416 |
+
"caption": "(b) Fine search space\nFigure 30: Hyperparameter search space for n=5\ud835\udc5b5n=5italic_n = 5",
|
| 417 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel250Tight/Scatter-interactionVertex005.png"
|
| 418 |
+
},
|
| 419 |
+
"31(a)": {
|
| 420 |
+
"figure_path": "2407.08742v4_figure_31(a).png",
|
| 421 |
+
"caption": "(a) Coarse search space\nFigure 31: Hyperparameter search space for n=10\ud835\udc5b10n=10italic_n = 10",
|
| 422 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel250/Scatter-interactionVertex010.png"
|
| 423 |
+
},
|
| 424 |
+
"31(b)": {
|
| 425 |
+
"figure_path": "2407.08742v4_figure_31(b).png",
|
| 426 |
+
"caption": "(b) Fine search space\nFigure 31: Hyperparameter search space for n=10\ud835\udc5b10n=10italic_n = 10",
|
| 427 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel250Tight/Scatter-interactionVertex010.png"
|
| 428 |
+
},
|
| 429 |
+
"32(a)": {
|
| 430 |
+
"figure_path": "2407.08742v4_figure_32(a).png",
|
| 431 |
+
"caption": "(a) Coarse search space\nFigure 32: Hyperparameter search space for n=20\ud835\udc5b20n=20italic_n = 20",
|
| 432 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel250/Scatter-interactionVertex020.png"
|
| 433 |
+
},
|
| 434 |
+
"32(b)": {
|
| 435 |
+
"figure_path": "2407.08742v4_figure_32(b).png",
|
| 436 |
+
"caption": "(b) Fine search space\nFigure 32: Hyperparameter search space for n=20\ud835\udc5b20n=20italic_n = 20",
|
| 437 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel250Tight/Scatter-interactionVertex020.png"
|
| 438 |
+
},
|
| 439 |
+
"33(a)": {
|
| 440 |
+
"figure_path": "2407.08742v4_figure_33(a).png",
|
| 441 |
+
"caption": "(a) Coarse search space\nFigure 33: Hyperparameter search space for n=30\ud835\udc5b30n=30italic_n = 30",
|
| 442 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel250/Scatter-interactionVertex030.png"
|
| 443 |
+
},
|
| 444 |
+
"33(b)": {
|
| 445 |
+
"figure_path": "2407.08742v4_figure_33(b).png",
|
| 446 |
+
"caption": "(b) Fine search space\nFigure 33: Hyperparameter search space for n=30\ud835\udc5b30n=30italic_n = 30",
|
| 447 |
+
"url": "http://arxiv.org/html/2407.08742v4/extracted/5870081/figures/normalizedModel250Tight/Scatter-interactionVertex030.png"
|
| 448 |
+
}
|
| 449 |
+
},
|
| 450 |
+
"validation": true,
|
| 451 |
+
"references": [],
|
| 452 |
+
"url": "http://arxiv.org/html/2407.08742v4"
|
| 453 |
+
}
|
20240921/2407.18957v4.json
ADDED
The diff for this file is too large to render.
See raw diff
20240921/2407.18970v3.json
ADDED
@@ -0,0 +1,188 @@
{
|
| 2 |
+
"title": "Region Guided Attention Network for Retinal Vessel Segmentation",
|
| 3 |
+
"abstract": "Retinal imaging has emerged as a promising method of addressing this challenge, taking advantage of the unique structure of the retina. The retina is an embryonic extension of the central nervous system, providing a direct in vivo window into neurological health. Recent studies have shown that specific structural changes in retinal vessels can not only serve as early indicators of various diseases but also help to understand the progression of the disease. In this work, we present a lightweight retinal vessel segmentation network based on the encoder-decoder mechanism with region-guided attention. We introduce inverse addition attention blocks with region-guided attention to focus on the foreground regions and improve the segmentation of regions of interest. To further boost the model\u2019s performance on retinal vessel segmentation, we employ a weighted dice loss. This choice is particularly effective in addressing the class imbalance issues frequently encountered in retinal vessel segmentation tasks. Dice loss penalises false positives and false negatives equally, encouraging the model to generate more accurate segmentation with improved object boundary delineation and reduced fragmentation. Extensive experiments on a benchmark dataset show better performance (0.8285, 0.8098, 0.9677, and 0.8166 recall, precision, accuracy, and F1 score, respectively) compared to state-of-the-art methods.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "As a direct extension of the nervous system, the eye is the only part of the body where the micro-neuronal and micro-vascular systems can be viewed non-invasively, providing an accessible method for diagnosing and monitoring the effects of systemic diseases and drugs. This provides clinicians with a potential method for using the eyes to diagnose and monitor systemic disease and to assess the impact of systemic disease on eye health. Recent clinical investigations have highlighted the potential of using retinal imaging biomarkers to detect not only eye diseases (such as glaucoma and age-related macular degeneration), but also various other diseases, including hypertension, dementia, Parkinson\u2019s disease, and multiple sclerosis, in their preclinical and presymptomatic stages [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###]. In addition, it also helps diagnose conditions related to brain and heart health, which exhibit abnormal variations in the vascular structure of the retina [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###]. Therefore, accurate segmentation of the retinal vessels makes it possible to build an automated diagnosis system, and segmentation of the retinal vessels has attracted the interest of researchers.\nAccurate segmentation of retinal vessels is hampered by the challenges posed by image features such as low contrast and imbalanced intensity, and anatomical features such as variations in thickness of the main vessels and capillaries. In addition, the presence of exudates and lesions in the image of the retinal fundus further complicates the segmentation task [7 ###reference_b7###, 8 ###reference_b8###, 1 ###reference_b1###, 9 ###reference_b9###]. To overcome these hurdles, researchers have employed a range of supervised or unsupervised algorithms alongside computer vision techniques, aiming for accurate and automated segmentation [10 ###reference_b10###, 11 ###reference_b11###]. Recent advances indicate that deep learning architectures outperform other methodologies in this domain [12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###]. Therefore, various deep-learning strategies have contributed to the advancement of retinal vessel segmentation.\nU-Net [15 ###reference_b15###], originally designed for medical image segmentation, exhibits a drawback in the identification of false boundaries within retinal images alongside blood vessels. Yan et al. [16 ###reference_b16###] bolstered U-Net\u2019s efficacy by implementing segment-level loss which emphasises the thickness consistency of thin vessels. Gu et al. [17 ###reference_b17###] proposed a context encoder to capture high-level features and used pre-trained ResNet blocks to improve retinal vessel segmentation. Wang et al. [18 ###reference_b18###] introduced DEU-Net, which employs a fusion module function to merge a spatial path with a large kernel. This integration preserves spatial data while effectively capturing semantic details. Dulau et al. [19 ###reference_b19###] developed a post-processing pipeline named VNR (Vessel Network Retrieval) to ensure a connected structure for retinal vessel networks, improving segmentation accuracy by removing misclassified pixels and reconnecting disconnected branches. Fu et al. [20 ###reference_b20###] used a multiscale, multilevel convolutional neural network (CNN) to obtain a dense hierarchical representation and also incorporated a conditional random field (CRF) to model extended interactions among pixels. 
However, despite the efficacy of these methods, they overlook the need to optimise computational efficiency to adapt the network for use in resource-limited embedded systems.\nRecently, researchers have shown an increased interest in lightweight networks for the segmentation of general objects and medical images [21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###]. SegNAS3D [31 ###reference_b31###] introduced a framework that searches for automated network architectures for 3D image segmentation, utilising a learnable directed acyclic graph representation to optimise hyperparameters and achieve superior segmentation results with reduced computational cost and smaller network architectures compared to manual approaches. IC-Net [32 ###reference_b32###] introduced an image cascade network that effectively reduces computation for real-time semantic segmentation and accelerates model convergence. Xception [33 ###reference_b33###] and MobileNet [34 ###reference_b34###] use depth-wise separable convolutions to reduce the parameter count and computational complexity, making them suitable for devices with limited computational resources. They both improve performance and efficiency in image classification and segmentation.\nIn this paper, we focus on a lightweight retinal vessel segmentation network based on the encoder-decoder mechanism with region-guided attention. Motivated by Xception and MobileNet, we implement depth-wise separable convolutions in the encoder and decoder blocks to minimise computational complexity and enhance model efficiency. In addition to the depth-wise convolutions, we use a reduced number of filters in both encoder and decoder blocks to increase the robustness of the model. These features make the model suitable for devices with limited computational and memory resources. We use weighted dice loss to enhance model performance on retinal vessel segmentation, as it efficiently handles class imbalance issues commonly encountered in retinal vessel segmentation tasks. By penalising false positives and false negatives equally, Dice loss encourages the model to produce more accurate segmentation with improved delineation of object boundaries and reduced fragmentation. In addition, we introduce Inverse addition Attention (IAA) blocks to focus on the foreground regions and improve the segmentation of the region of interest (ROI). The IAA blocks dramatically improve model performance. The main contributions of this work are:\nWe present a lightweight region-guided segmentation network with only 40K parameters that can be deployed on devices with limited computational and memory resources.\nWe introduce region-guided inverse addition attention blocks along with weighted dice loss specifically crafted for retinal vessel segmentation that explicitly focuses on foreground regions, resulting in better segmentation of the ROI.\nTo refine the initial segmentation maps to obtain the refined segmentation, we propose a partial decoder and use multiple attention blocks that align the high- and low-level features.\nWe have performed extensive experiments and identified the best hyperparameters for the segmentation of retinal vessels on benchmark datasets."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": "Retinal vessel segmentation is a crucial task in ophthalmic image analysis, enabling the detection and monitoring of various eye diseases such as diabetic retinopathy, glaucoma, and hypertensive retinopathy. Over the years, numerous approaches have been proposed that use both traditional image processing techniques and advanced machine learning algorithms. Earlier methods for segmentation of the retinal vessels, that relied primarily on traditional image processing, typically involved several steps including pre-processing, vessel enhancement, segmentation, and post-processing. To improve segmentation performance, some researchers used preprocessing techniques to enhance the image quality. However, advanced machine learning-based techniques can automatically learn features from large datasets, often outperforming traditional approaches.\nCNN-based techniques have exhibited promising performance in segmentation tasks and therefore have gained popularity. Uysal et al.[35 ###reference_b35###] proposed a CNN architecture in an end-to-end learning framework for medical image segmentation. They used data augmentation to enlarge the dataset synthetically for better performance, yet their work faces the challenge of dependency on the dataset and the model overfits on a majority of medical image datasets due to large model capacity and small dataset size. Oliveira et al.[36 ###reference_b36###] used a fully convolutional neural network (FCN) for the task of segmenting retinal vessels from fundus images. FCNs are particularly well suited for this task because of their ability to produce pixel-wise predictions, which is crucial to accurately delineating the thin and intricate structures of retinal vessels. Yan et al.[37 ###reference_b37###], proposed a three-stage FCN architecture to progressively refine predictions through multiple stages, each stage building on the output of the previous one. In the first stage, the model gives a coarse prediction, in the second stage the coarse predictions are refined by focusing on medium-level features and improving the resolution, while in the final stage, the model achieves high precision in segmentation by correcting small errors and adding more fine details. The three-stage FCN network proved to be very effective for the delineation and segmentation of retinal vessels. Guo et al. proposed the BTS-DSN model [38 ###reference_b38###], which incorporates auxiliary supervision signals at multiple intermediate layers. This approach aids in facilitating gradient flow during the training process, leading to more stable convergence and improved overall segmentation performance. The network architecture includes short connections, akin to those found in residual networks. These connections are crucial for effectively propagating information across different layers, thus improving feature extraction and improving segmentation accuracy. The authors use ResNet-101 as the backbone of the BTS-DSN model, providing it with substantial capacity. However, this choice also renders the model computationally intensive and resource-demanding, which may be a consideration for practical deployment. 
Arsalan et al.[39 ###reference_b39###] proposed an AI-based semantic segmentation architecture tailored for the analysis of retinal images, which leverages a deep learning framework with multiple CNN layers to achieve high precision in identifying and segmenting retinal regions affected by diabetic and hypertensive retinopathy.\nU-Net[15 ###reference_b15###] achieved ground-breaking results in image segmentation tasks and since then researchers have come up with numerous variations of the popular encoder-decoder-based architecture and have improved segmentation accuracy in general object segmentation tasks, as well as on medical image segmentation. Oktay et al.[40 ###reference_b40###] added attention gates to the standard U-Net to achieve better model sensitivity and segmentation accuracy. The authors combined the attention mechanism with the U-Net architecture for the task of segmenting multiclass medical images and obtained promising results while keeping low computational complexity. Jin et al.[41 ###reference_b41###] introduced the DUNet architecture, which integrates deformable convolutional networks into the U-Net framework. The architecture was designed to capture more complex vessel structures and improve segmentation accuracy by adapting the receptive fields to the shape of the retinal vessels. Traditional convolutional layers have fixed geometric structures, which can be limiting when dealing with irregular shapes of retinal vessels, while deformable convolutions address this by allowing the network to learn offsets for the convolutional kernels, enabling adaptive and flexible receptive fields that can better capture the variability in vessel shapes and sizes. Reza et al.[42 ###reference_b42###] introduced the use of Bidirectional Convolutional Long-Short-Term Memory (Bi-ConvLSTM) layers within the U-Net architecture. ConvLSTM layers are designed to handle spatial and temporal dependencies in data, making them suitable for tasks that require contextual understanding over sequences or spatially dependent structures. By using Bi-ConvLSTM, the model can capture dependencies in both forward and backward directions, enhancing its ability to model complex spatial relationships.\nWei et al.[43 ###reference_b43###] introduced Genetic U-Net, a framework that leverages genetic algorithms for the automatic design of deep neural networks specifically tailored for the segmentation of the retinal vessels. This approach aims to optimise the network architecture without extensive manual intervention. The paper applies Neural Architecture Search (NAS) using genetic algorithms to explore and identify optimal network structures. This method systematically evolves network architectures to enhance performance, demonstrating a sophisticated use of NAS in medical imaging. Although the genetic algorithm optimises the network architecture, scaling this approach to very large datasets or real-time applications might be challenging due to the inherent computational demands and the iterative nature of the search process.\n###figure_1###"
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Light-weight Models for Medical Image Segmentation",
|
| 21 |
+
"text": "After the success of light-weight models such as Mobile-Net [44 ###reference_b44###] on general object segmentation, researchers have recently been interested in developing light-weight networks for the segmentation of medical images. They have tried to minimise the size and capacity of the network, decrease the overall number of computations performed, and reduce the memory occupied by the model. Iqbal et al.[27 ###reference_b27###] proposed a lightweight, compact, and efficient network called LDMRes-Net based on dual multiscale residual blocks that incorporate a multiscale feature extraction mechanism, allowing it to capture details at various levels of granularity. They reduced the number of parameters and computational complexity compared to traditional deep learning networks. The efficiency of the LDMRes-Net is enhanced by the implementation of depth-wise separable convolutions while the residual connections in the network maintain their performance. Tariq et al. [24 ###reference_b24###] have proposed a lightweight network for medical image segmentation. It focuses on capturing high-frequency features necessary for medical image segmentation tasks, and the implementation of expand-and-squeeze blocks makes their model robust and computationally efficient. The authors have attempted to provide a solution for applications on devices with limited computational resources. Li et al.[45 ###reference_b45###] utilised a lightweight version of U-Net that is designed to be computationally efficient while maintaining high accuracy in the segmentation of lesions in ultrasound images. There is not much work done on the utilisation and introduction of lightweight models for the segmentation of retinal vessels. In this paper, we focus on building a lightweight model for the segmentation of retinal vessels while maintaining state-of-the-art segmentation performance."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "Region Guided Attention Network",
|
| 27 |
+
"text": "In this work, we propose a region-guided attention network that uses the strengths of U-Net[15 ###reference_b15###] and a region-guided attention mechanism for the segmentation of medical images. As a base model, we first modified the U-Net architecture to its lightweight version. To do so, we minimise the number of learnable parameters by reducing the number of layers and the number of filters in each layer. The motivation behind this step was to deal with the overfitting issue that occurs mainly due to the large model capacity and the small size of the medical images dataset that specifically deals with connected vessels. This choice of parameters makes the model computationally efficient while maintaining satisfactory segmentation performance and deals with weak anti-noise interference ability specifically for capillary vessels.\nTo boost the segmentation performance even further, we introduce an Inverse Addition Attention mechanism that forces the model to focus on the ROIs that are most relevant to the segmentation task. As shown in Figure 1 ###reference_###, the initial segmentation map is generated by the partial decoder, and through multiple attention blocks, we refine the initial segmentation maps until we obtain the final refined segmentation map."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.1",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "Encoder Block",
|
| 33 |
+
"text": "The encoder block in the proposed model comprises two convolutional layers, each followed by a batch normalisation layer and a ReLU activation function, and is concluded with a non-overlapping max pooling operation. The primary goal of this block is to capture and refine relevant features before passing them on to subsequent blocks. To enhance computational efficiency and avoid redundant operations, we employ depthwise separable convolutions. This approach not only accelerates model training and inference, which is critical for real-time applications but also significantly reduces the number of parameters, resulting in lower memory usage. Depthwise convolutions process each channel independently, allowing efficient spatial feature extraction without excessive parameter overhead. To achieve a balanced integration of spatial and channel-wise features, we combine depth-wise convolutions with point-wise () convolutions. This combination enables the model to effectively capture detailed and relevant features, leading to improved representation learning and overall model performance. In addition, the use of batch normalisation helps stabilise and accelerate the training process by normalising the activations of each layer. This reduces internal covariate shift and enables the use of higher learning rates, further speeding up convergence. The ReLU activation function introduces nonlinearity into the model, allowing it to learn complex patterns and interactions within the data. The nonoverlapping max pooling operation reduces the spatial dimensions of the feature maps, thereby decreasing the computational load while preserving essential spatial information.\nOverall, the design of the encoder block, with its separable convolutions in-depth, batch normalisation, ReLU activations, and max pooling, ensures efficient and effective feature extraction. This combination enhances the model\u2019s ability to learn rich representations from the input data, contributing to improved performance in downstream tasks."
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.2",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Decoder Block",
|
| 39 |
+
"text": "A decoder block in the proposed network consists of a deconvolution operation that upsamples the input feature maps, followed by two depth-wise separable convolution operations similar to those in the encoder blocks. Each decoder block receives input from either the previous decoder block or the bottleneck layer, in addition to features from a corresponding encoder block via a skip connection. The deconvolution operation on the decoder up-samples the feature maps, aligning their spatial dimensions with those of the encoder features. This upsampled feature map is then concatenated with the feature map from the encoder block, preserving fine-grained spatial information critical for precise segmentation. After concatenation, the combined features undergo processing through a series of convolutional layers followed by ReLU activation. These layers refine the upsampled features, enhancing the network\u2019s ability to accurately segment the image. The output of this process is then passed on to the next decoder block, continuing the upsampling and refinement process until the original image resolution is restored. By incorporating skip connections and refining the features through convolutional layers, the decoder blocks effectively reconstruct the segmented image, maintaining spatial accuracy and detail. We also used a cascaded partial decoder to align the high- and low-level features that are then passed to the main decoders (see Figure 1 ###reference_### partial decoder block). This addition positively impacts the segmentation result and, as shown in Table 2 ###reference_###, represented by PD (Partial Decoder)."
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.3",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Inverse Addition Attention Block",
|
| 45 |
+
"text": "We implement an Inverse Addition Attention (IAA) block to enhance the model\u2019s focus on foreground pixels, specifically those containing vessels. To achieve optimal results, we incorporate an IAA block for each decoder block in the network. Each IAA block receives feature maps from the corresponding decoder block and the segmentation map from the preceding IAA block. The exception is the first IAA block, which receives its initial input from the partial decoder block. Within each IAA block, we replicate the segmentation map, apply a sigmoid activation to it, and then compute its inverse. This inverted map is multiplied element-wise with the feature maps from the decoder block, effectively emphasising the vessel regions by suppressing the background. Following this, the actual segmentation map is added to the resulting feature map, integrating the refined attention information. The combined output is then passed to the next IAA block, continuing the process. This structured approach ensures that the model progressively refines its attention on the vessels through each stage of the decoding process, significantly improving segmentation accuracy by retaining critical spatial details and enhancing feature representation."
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.4",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "Proposed Network Architecture",
|
| 51 |
+
"text": "To explain how the proposed network processes the input to obtain the desired segmentation map, some components of the model are first discussed. Let the model input be defined as , where , and be defined as a separable convolution operation in depth, where is the kernel size, is batch normalisation and be a non-overlapping max pooling operation of size . Let be the ReLU activation and be the convolutional block given in Equation 1 ###reference_###. Then the first encoder block processes the input and returns two values and as in equations Eq.(2 ###reference_###-3 ###reference_###).\nHere and , where , and are the height, width, and channel of the feature maps respectively. Now we feed to the second encoder block and get and by repeating equations Eq.(1 ###reference_###-3 ###reference_###). We repeat the same sequence of steps for the third encoder block passing as the input of the layer and get and as the outputs. By now we are done with three encoder blocks and now we feed to the bottleneck layer where Eq.1 ###reference_### is performed and we get the feature map which will be fed to the decoder blocks. At this point, the feature map obtained from the encoder blocks, , is upsampled in the first decoder block by a deconvolution operation where is the kernel size. After the upsampling operation, the resultant feature map is concatenated with and we pass it through the convolution block Eq.1 ###reference_###. The resultant features of the first encoder block,, are calculated in Eq.4 ###reference_###.\nThe second decoder block operates on and as inputs and produces by repeating Eq.4 ###reference_### and likewise, the third decoder block takes and as input and produces using Eq.(4 ###reference_###). We then use a partial decoder [46 ###reference_b46###] to aggregate high-level features that have smaller spatial resolution compared to low-level features. For this purpose, we pass through the convolution block (Eq.1 ###reference_###) and get , as shown in Eq.5 ###reference_###.\nIn order to refine the obtained features for more accurate segmentation, we embed an attention mechanism, namely IAA. The first attention block operates on the and to generate the first predicted segmentation mask , given in Eq.6 ###reference_###, which is later refined twice till we get the final segmentation mask.\nOnce we have obtained , we refine it by passing it twice through the inverse addition block along and sequentially. is obtained by passing and to Eq.7 ###reference_### and is obtained by processing and through Eq.7 ###reference_###.\nWe run through a convolutional layer followed by a sigmoid activation function to obtain the final predicted segmentation mask as given in Eq.8 ###reference_###."
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Results and Discussion",
|
| 57 |
+
"text": "In this section, we first describe the dataset, followed by implementation details, ablation results, and comparative analysis."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.1",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "Dataset",
|
| 63 |
+
"text": "We used DRIVE, CHASE_DB1, and STARE datasets for the experiments. DRIVE contains a total of 40 images, 20 of which are used for training and 20 for testing. However, the dataset size is not suitable for deep learning purposes; therefore, we augmented the dataset. The enhancement included resizing all images from to and the application of horizontal and vertical flips along with degrees of rotation in the training images and saving the image after every degree of rotation. As a result of the augmentation, the training set size increased to from images. Note that the test images were only resized and no other pre-processing or post-processing was performed on them. Likewise, we applied the same augmentations on the CHASE_DB1 dataset where the 20 training images were resized and augmented while the 8 test images were resized. For the STARE dataset, we used the first images for training on which we applied horizontal and vertical flips, degrees rotation, and again horizontal and vertical flips on the rotated images. The next images were used for testing and only resized. The details of the datasets are given in Table 1 ###reference_###."
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Implementation Details",
|
| 69 |
+
"text": "All experiments were carried out on a high-performance GeForce RTX 3090 GPU, training the model for a total of 70 epochs. Initially, we attempted training for 100 epochs; however, through iterative experimentation, we observed that optimal results were consistently achieved around the 58th epoch. Therefore, we adjusted the training protocol to end at 70 epochs for subsequent experiments. The training regimen began with a learning rate of , incorporating a learning rate decay strategy throughout the training process. To optimise the model parameters, we employed the Adam optimiser with a momentum setting of . Additionally, we implemented a learning rate reduction strategy based on plateau detection, with patience of 5 steps before triggering a reduction. In exploring various objective functions, we experimented with several commonly used loss functions, including binary cross-entropy (BCE), intersection over union (IoU), a combination of BCE and IoU, Dice loss, and combinations of Dice, BCE and IoU. Through rigorous evaluation, we determined that the weighted Dice loss consistently yielded the best results for the segmentation tasks. The selection of Dice loss as the optimal objective function is attributed to its effectiveness in addressing class imbalance, particularly prevalent in segmentation tasks such as segmentation of the retinal vessels. Its ability to provide a balanced measure of segmentation accuracy makes it suitable for a wide range of segmentation applications, ensuring robust performance across diverse datasets and tasks.\n###figure_2### ###figure_3###"
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.3",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "Ablation Study",
|
| 75 |
+
"text": "We conducted a comprehensive series of experiments to explore various configurations, including different loss functions and filter counts within the encoder and decoder blocks. In particular, the investigations revealed that the use of fewer filters than the standard UNET configuration yielded optimal results, as outlined in Table 2 ###reference_###. Our meticulous analyses highlighted the superiority of the Dice loss function for retinal vessel segmentation. This loss function effectively addresses class imbalance, leading to improved segmentation accuracy. Furthermore, the integration of a region-guided attention mechanism significantly improved network segmentation performance. Specifically, as shown in Table 2 ###reference_###, the incorporation of the IAA block increased segmentation performance by approximately . To effectively integrate global and local contexts, we used a cascaded partial decoder [46 ###reference_b46###]. The global context captures the overall structure of the object, while the local context adds fine-grained details. This dual-context integration ensures a comprehensive and detailed representation of the object. The integration of PD resulted in better connected vessels and hence improved segmentation results. Finally, reducing the number of filters is particularly advantageous, as it mitigates the risk of overfitting by limiting the capacity of the model. This approach is especially beneficial for medical image datasets, which are often small in size and pose inherent challenges.\nIn general, the results demonstrate that carefully selecting loss functions and components of the network architecture, such as filter counts and attention mechanisms, can substantially improve segmentation performance. These findings provide valuable information for the development of more effective models for medical image analysis."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.4",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "Comparison with Existing Works",
|
| 81 |
+
"text": "We conducted a comprehensive evaluation of our proposed model using the DRIVE [47 ###reference_b47###], STARE [56 ###reference_b56###], and CHASEDB1 [48 ###reference_b48###] datasets, comparing its segmentation performance with existing methods. In Table 3 ###reference_### a detailed comparison is presented that shows the efficacy of the proposed model against the latest approaches on the DRIVE dataset. It is worth noting that the majority of existing work shows a remarkable gap between sensitivity and specificity, which is due to the class imbalance in terms of background and foreground pixels. The higher specificity of the models is due to the excess of background pixels in the image. However, our proposed model achieves greater sensitivity and specificity, which shows that our model segments the foreground and the background pixels better than the existing work. In addition, our model achieves the highest accuracy among existing works, showing that we capture the most retinal vessels accurately. In Tables4 ###reference_### and Table 5 ###reference_###, detailed comparisons of the proposed model with the state-of-the-art approaches on the CHASEDB1 and STARE datasets are presented, respectively. The results of the experiments demonstrate the notable strength and superiority of the proposed model, both in terms of segmentation performance and computational efficiency. Despite its compact size, with only million learnable parameters, our proposed model outperforms existing models in terms of segmentation accuracy. This underscores the efficiency and applicability of our model on devices with limited memory and computational resources while still achieving superior segmentation results compared to larger models. The proposed method achieves results superior to those of the state-of-the-art CHASEDB1 and STARE datasets. The region-guided attention block pushes our model to distinguish vessels from the background, and the low number of network parameters helps it avoid the overfitting curse commonly encountered when working with medical image datasets. Hence, our model competitively beats the state-of-the-art techniques on DRIVE, CHASE, and STARE datasets."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.5",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "Qualitative Results",
|
| 87 |
+
"text": "We present some of the qualitative results obtained by our proposed method. Figure 2 ###reference_### provides a qualitative analysis of our model performance on sample query images from the DRIVE dataset. Furthermore, Figure 2 ###reference_### illustrates that our model accurately captures both thick and thin retinal vessels, further confirming its effectiveness in accurately segmenting the structures of the retinal vessels.\nWe showcase the qualitative performance of the proposed method on the CHASEDB1 dataset [48 ###reference_b48###] in Figure 3 ###reference_###. We distinguish the correctly labeled pixels, false positives, and false negatives in the last column of the figure. The qualitative results, too, approve the efficiency of the proposed model in segmenting retinal vessels."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "5",
|
| 91 |
+
"parent_section_id": null,
|
| 92 |
+
"section_name": "Conclusion and Future Work",
|
| 93 |
+
"text": "In this paper, we introduce a lightweight segmentation network with a significantly lower number of parameters (0.04 million), which is composed of the encoder-decoder mechanism along with partial decoder and inverse addition attention blocks for region-guided segmentation tailored specifically for retinal vessels, complemented by an in-depth ablation study focused on hyperparameter optimisation. The region-guided attention block focuses on the foreground push, whereas the cascaded partial decoder aligns the high and low level features, and hence improves the performance of the model by . This comprehensive study provides researchers with valuable information, offering a solid foundation to enhance retinal vessel segmentation without the need for extensive hyperparameter optimisation. Thus, streamlining future research efforts in this domain."
|
| 94 |
+
}
|
| 95 |
+
],
|
| 96 |
+
"appendix": [],
|
| 97 |
+
"tables": {
|
| 98 |
+
"1": {
|
| 99 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.5.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S4.T1.6.2\" style=\"font-size:90%;\">Datasets used in the study.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.3.4.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.3.4.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.4.1.1.1\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.3.4.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.4.1.2.1\">Image Resolution</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.3.4.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.4.1.3.1\">Total</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.3.4.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.4.1.4.1\">Training/Test Split</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.1.2\">DRIVE</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.1.1\">584565</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.1.3\">40</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.1.4\">Train: 20, Test: 20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.2.2\">CHASEDB1</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.2.1\">999960</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.2.3\">28</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.2.4\">Train: 20, Test: 8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.3.3.2\">STARE</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.3.3.1\">605 700</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.3.3.3\">20</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.3.3.4\">Train: 10, Test: 10</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 100 |
+
"capture": "Table 1: Datasets used in the study."
|
| 101 |
+
},
|
| 102 |
+
"2": {
|
| 103 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S4.T2.4.2\" style=\"font-size:90%;\">Ablation study on different loss functions with basic UNet, UNet plus region guided IAA Block and with integration of cascaded partial decoder. We get the best results using UNet with an IAA block, a cascaded partial decoder, and fewer filters in the encoder and decoder blocks. *with filters (8,16,24,32)</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T2.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.2.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.3.1\">Loss Functions</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.4.1\">Jaccard</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.5.1\">Recall</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.6.1\">Precision</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.7.1\">Acc</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.8.1\">p-value</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.1.2.1.1\">UNET</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.2.1.2\">IoU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.2.1.3\">0.6602</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.2.1.4\">0.7951</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.2.1.5\">0.7876</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.2.1.6\">0.8081</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.2.1.7\">0.9648</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.2.1.8\">0.493</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.3.2.1\">UNET</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.3.2.2\">Dice</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.3.2.3\">0.6621</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.3.2.4\">0.7965</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.3.2.5\">0.8042</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.3.2.6\">0.7936</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.3.2.7\">0.9643</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.3.2.8\">0.490</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S4.T2.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.4.3.1\">UNET</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.3.2\">Dice + BCE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.3.3\">0.6584</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.3.4\">0.7938</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.3.5\">0.7850</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.3.6\">0.8080</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.3.7\">0.9646</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.3.8\">0.481</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.5.4.1\">IAA + UNET</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.4.2\">IoU</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.4.3\">0.6569</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.4.4\">0.7926</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.4.5\">0.7766</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.4.6\">0.8160</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.4.7\">0.9648</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.5.4.8\">0.484</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.6.5.1\">IAA + UNET</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.5.2\">Dice</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.5.3\">0.6725</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.5.4\">0.8039</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.5.5\">0.8055</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.5.6\">0.8094</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.5.7\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.6.5.7.1\">0.9659</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.5.8\">0.521</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.7.6.1\">IAA + UNET</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.6.2\">Dice + BCE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.6.3\">0.6794</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.6.4\">0.8087</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.6.5\">0.8046</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.6.6\">0.8205</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.6.7\">0.9671</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.7.6.8\">0.512</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.8.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.8.7.1\">IAA + PD + UNET</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.7.2\">Dice + BCE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.7.3\">0.6805</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.7.4\">0.8096</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.7.5\">0.8106</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.7.6\">0.8144</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.7.7\">0.9666</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.8.7.8\">0.514</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.9.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T2.1.9.8.1\">IAA + PD + UNET *</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.9.8.2\">Dice + BCE</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_b\" id=\"S4.T2.1.9.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.9.8.3.1\">0.6903</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.9.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.9.8.4.1\">0.8166</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.9.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.9.8.5.1\">0.8285</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.9.8.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.9.8.6.1\">0.8098</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.9.8.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.9.8.7.1\">0.9677</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.9.8.8\">0.512</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 104 |
+
"capture": "Table 2: Ablation study on different loss functions with basic UNet, UNet plus region guided IAA Block and with integration of cascaded partial decoder. We get the best results using UNet with an IAA block, a cascaded partial decoder, and fewer filters in the encoder and decoder blocks. *with filters (8,16,24,32)"
|
| 105 |
+
},
|
| 106 |
+
"3": {
|
| 107 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T3.5.2.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S4.T3.2.1\" style=\"font-size:90%;\">Comparison of the proposed method with other existing works on the DRIVE <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib47\" title=\"\">47</a>]</cite> dataset. The best results are in bold, and dashes indicate unknown results. Some works do not compute the Score for their network performance, hence there is \u2019-\u2019 in the table.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T3.3.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.2.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.3.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.3.1\">Sensitivity</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.3.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.4.1\">Specificity</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.3.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.5.1\">Acc</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.3.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.3.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.6.1\">Params (M)</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T3.3.2.1.1\">VessNet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib39\" title=\"\">39</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.2.1.2\">0.8022</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.2.1.3\">0.9810</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.2.1.4\">0.9655</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.2.1.5\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.2.1.6\">9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.3.3.2.1\">ERFNet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib49\" title=\"\">49</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.2.2\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.2.3\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.2.4\">0.9598</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.2.5\">0.7652</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.2.6\">2.06</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.3.4.3.1\">UNet++ <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib50\" title=\"\">50</a>]</cite>\n</th>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.4.3.2\">0.8031</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.4.3.3\">0.9820</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.4.3.4\">0.9533</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.4.3.5\">0.8111</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.4.3.6\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.3.5.4.1\">Three-Stage FCN <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib37\" title=\"\">37</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.5.4.2\">0.7631</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.5.4.3\">0.9820</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.5.4.4\">0.9538</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.5.4.5\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.5.4.6\">20.40</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.3.6.5.1\">FCN <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib36\" title=\"\">36</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.6.5.2\">0.8039</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.6.5.3\">0.9804</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.6.5.4\">0.9576</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.6.5.5\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.6.5.6\">0.20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.3.7.6.1\">M2U-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib51\" title=\"\">51</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.7.6.2\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.7.6.3\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.7.6.4\">0.9630</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.7.6.5\">0.8091</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.7.6.6\">0.55</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.8.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.3.8.7.1\">Vessel-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib52\" title=\"\">52</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.8.7.2\">0.8038</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.8.7.3\">0.9802</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.8.7.4\">0.9578</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.8.7.5\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.8.7.6\">1.70</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.9.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.3.9.8.1\">MobileNet-V3 <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib34\" title=\"\">34</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.9.8.2\">0.8250</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.9.8.3\">0.9771</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.9.8.4\">0.9371</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.9.8.5\">0.6575</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T3.3.9.8.6\">2.50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.10.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.3.10.9.1\">DUNet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib41\" title=\"\">41</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.10.9.2\">0.7963</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.10.9.3\">0.9800</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.10.9.4\">0.9566</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.10.9.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.10.9.5.1\">0.8203</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.10.9.6\">0.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.11.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T3.3.11.10.1\">MS-NFN <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib53\" title=\"\">53</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.11.10.2\">0.7844</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.11.10.3\">0.9819</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.11.10.4\">0.9567</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.11.10.5\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.11.10.6\">0.40</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.12.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S4.T3.3.12.11.1\">Proposed Method</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.3.12.11.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.12.11.2.1\">0.8285</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.3.12.11.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.12.11.3.1\">0.9822</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.3.12.11.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.12.11.4.1\">0.9677</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.3.12.11.5\">0.8166</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T3.3.12.11.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.12.11.6.1\">0.04</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 108 |
+
"capture": "Table 3: Comparison of the proposed method with other existing works on the DRIVE [47] dataset. The best results are in bold, and dashes indicate unknown results. Some works do not compute the Score for their network performance, hence there is \u2019-\u2019 in the table."
|
| 109 |
+
},
|
| 110 |
+
"4": {
|
| 111 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T4.2.1.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S4.T4.3.2\" style=\"font-size:90%;\">Performance comparison between the proposed method and some alternative methods on the CHASEDB1 <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib48\" title=\"\">48</a>]</cite> dataset.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.4.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S4.T4.4.1.1.1\" rowspan=\"2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.1.1.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"4\" id=\"S4.T4.4.1.1.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.1.2.1\">Performance Measures in (%)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T4.4.1.1.3\" rowspan=\"2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T4.4.1.1.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.1.1.3.1.1\">Params (M)</span></span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.4.2.2.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.2.2.1.1\">Se.</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.4.2.2.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.2.2.2.1\">Sp.</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.4.2.2.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.2.2.3.1\">Acc</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.4.2.2.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.2.2.4.1\">F1</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.4.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T4.4.3.1.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">Att UNet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib40\" title=\"\">40</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.3.1.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">80.10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.3.1.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">98.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.3.1.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">96.42</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.3.1.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">80.12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.3.1.6\" 
style=\"padding-left:4.0pt;padding-right:4.0pt;\">6.40</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.4.4.2.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">SegNet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib54\" title=\"\">54</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.4.2.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">78.93</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.4.2.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">97.92</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.4.2.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">96.11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.4.2.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">79.01</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.4.2.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">28.40</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.4.5.3.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">Wave-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib55\" title=\"\">55</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.5.3.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">82.83</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.5.3.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">98.21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.5.3.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">96.64</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.5.3.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">83.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.5.3.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">1.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.4.6.4.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">BTS-DSN <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib38\" title=\"\">38</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.6.4.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">78.88</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.6.4.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">98.01</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.6.4.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">96.27</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.6.4.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">79.83</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.6.4.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">7.80</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.4.7.5.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">DUNet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib41\" title=\"\">41</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.7.5.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">77.35</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.7.5.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">98.01</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.7.5.4\" 
style=\"padding-left:4.0pt;padding-right:4.0pt;\">96.18</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.7.5.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">78.83</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.7.5.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">0.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.8.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.4.8.6.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">G-Net Light <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib5\" title=\"\">5</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.8.6.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">82.10</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.8.6.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.8.6.3.1\">98.38</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.8.6.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">97.26</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.8.6.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">80.48</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.8.6.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">0.39</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.9.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T4.4.9.7.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">BCD-Unet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib42\" title=\"\">42</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.9.7.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">79.41</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.9.7.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">98.06</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.9.7.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">96.07</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.9.7.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">80.22</td>\n<td class=\"ltx_td\" id=\"S4.T4.4.9.7.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.10.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S4.T4.4.10.8.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.10.8.1.1\">Proposed Method</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T4.4.10.8.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.10.8.2.1\">82.13</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T4.4.10.8.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">98.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T4.4.10.8.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.10.8.4.1\">97.41</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T4.4.10.8.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.10.8.5.1\">84.59</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T4.4.10.8.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">0.04</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 112 |
+
"capture": "Table 4: Performance comparison between the proposed method and some alternative methods on the CHASEDB1 [48] dataset."
|
| 113 |
+
},
|
| 114 |
+
"5": {
|
| 115 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T5.2.1.1\" style=\"font-size:90%;\">Table 5</span>: </span><span class=\"ltx_text\" id=\"S4.T5.3.2\" style=\"font-size:90%;\">Performance comparison of the proposed method with a number of alternatives on the STARE <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib56\" title=\"\">56</a>]</cite> dataset.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T5.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T5.4.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T5.4.1.1.1\" rowspan=\"2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.1.1.1.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"4\" id=\"S4.T5.4.1.1.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.1.1.2.1\">Performance Measures in (%)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T5.4.1.1.3\" rowspan=\"2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.1.1.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.1.1.3.1.1\">Params (M)</span></span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T5.4.2.2.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.2.2.1.1\">Se.</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T5.4.2.2.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.2.2.2.1\">Sp.</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T5.4.2.2.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.2.2.3.1\">Acc</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T5.4.2.2.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.2.2.4.1\">F1</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T5.4.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T5.4.3.1.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">Three-stage FCN <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib57\" title=\"\">57</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.4.3.1.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">77.35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.4.3.1.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">98.57</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.4.3.1.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">96.38</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.4.3.1.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.4.3.1.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">20.40</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S4.T5.4.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T5.4.4.2.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">BTS-DSN <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib38\" title=\"\">38</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.4.2.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">82.01</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.4.2.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">98.28</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.4.2.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">96.60</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.4.2.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">83.62</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.4.2.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">7.80</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T5.4.5.3.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">DUNet <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib41\" title=\"\">41</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.5.3.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">78.92</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.5.3.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">98.16</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.5.3.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">96.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.5.3.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">81.43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.5.3.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">0.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T5.4.6.4.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">OCE-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib58\" title=\"\">58</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.6.4.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">80.12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.6.4.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">98.65</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.6.4.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">96.72</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.6.4.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">83.41</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.6.4.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">6.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T5.4.7.5.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">Wave-Net <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib55\" title=\"\">55</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.7.5.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">79.02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.7.5.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">98.36</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.7.5.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">96.41</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T5.4.7.5.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">81.40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.7.5.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">1.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.8.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T5.4.8.6.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">G-Net Light <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib5\" title=\"\">5</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.8.6.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">81.70</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.8.6.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">98.53</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.8.6.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">97.30</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.8.6.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">81.78</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.8.6.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">0.39</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.9.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T5.4.9.7.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">LDMRes-Net<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2407.18970v3#bib.bib27\" title=\"\">27</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.9.7.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">84.07</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.9.7.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.9.7.3.1\">98.75</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.9.7.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">97.64</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.9.7.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">84.24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.9.7.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">0.072</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.10.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_t\" id=\"S4.T5.4.10.8.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">Proposed Method</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T5.4.10.8.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.10.8.2.1\">84.64</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T5.4.10.8.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">98.36</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T5.4.10.8.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.10.8.4.1\">97.91</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T5.4.10.8.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.10.8.5.1\">84.32</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T5.4.10.8.6\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.10.8.6.1\">0.04</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 116 |
+
"capture": "Table 5: Performance comparison of the proposed method with a number of alternatives on the STARE [56] dataset."
|
| 117 |
+
}
|
| 118 |
+
},
|
| 119 |
+
"image_paths": {
|
| 120 |
+
"1": {
|
| 121 |
+
"figure_path": "2407.18970v3_figure_1.png",
|
| 122 |
+
"caption": "Figure 1: Block diagram of the proposed methodology. The figure depicts the overall proposed methodology, detailing the Encoder, Bottleneck, and Decoder Blocks. The detailed layers and operations of the Partial Decoder and Inverse Addition Attention blocks are given in the respective sections. The figure also explains what each symbol in the flow chart means.",
|
| 123 |
+
"url": "http://arxiv.org/html/2407.18970v3/x1.png"
|
| 124 |
+
},
|
| 125 |
+
"2": {
|
| 126 |
+
"figure_path": "2407.18970v3_figure_2.png",
|
| 127 |
+
"caption": "Figure 2: Qualitative results of the proposed method on some sample images from the DRIVE [47] dataset. The columns from left to right show the query image, segmentation mask (ground truth), and the mask predicted by the model and analytic mask respectively. The green pixels in the analytic mask represent the correctly segmented pixels while the red pixels are the false negatives and the blue pixels are the false positives.",
|
| 128 |
+
"url": "http://arxiv.org/html/2407.18970v3/x2.png"
|
| 129 |
+
},
|
| 130 |
+
"3": {
|
| 131 |
+
"figure_path": "2407.18970v3_figure_3.png",
|
| 132 |
+
"caption": "Figure 3: Qualitative results of the proposed method on some sample images from the CHASEDB1 [48] dataset. The columns from left to right show the query image, segmentation mask (ground truth), and the mask predicted by the model and analytic mask respectively. The green pixels in the analytic mask represent the correctly segmented pixels while the red pixels are the false negatives and the blue pixels are the false positives.",
|
| 133 |
+
"url": "http://arxiv.org/html/2407.18970v3/x3.png"
|
| 134 |
+
}
|
| 135 |
+
},
|
| 136 |
+
"validation": true,
|
| 137 |
+
"references": [
|
| 138 |
+
{
|
| 139 |
+
"1": {
|
| 140 |
+
"title": "doi:10.1109/ACCESS.2019.2953259.",
|
| 141 |
+
"author": "A. Khawaja, T. M. Khan, K. Naveed, S. S. Naqvi, N. U. Rehman, S. Junaid Nawaz, An improved retinal vessel segmentation framework using Frangi filter coupled with the probabilistic patch based denoiser, IEEE Access 7 (2019) 164344\u2013164361.",
|
| 142 |
+
"venue": null,
|
| 143 |
+
"url": "https://doi.org/10.1109/ACCESS.2019.2953259"
|
| 144 |
+
}
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"2": {
|
| 148 |
+
"title": "doi:10.1109/JBHI.2018.2872813.",
|
| 149 |
+
"author": "Z. Yan, X. Yang, K. Cheng, A Three-Stage Deep Learning Model for Accurate Retinal Vessel Segmentation, IEEE Journal of Biomedical and Health Informatics 23 (4) (2019) 1427\u20131436.",
|
| 150 |
+
"venue": null,
|
| 151 |
+
"url": "https://doi.org/10.1109/JBHI.2018.2872813"
|
| 152 |
+
}
|
| 153 |
+
},
|
| 154 |
+
{
|
| 155 |
+
"3": {
|
| 156 |
+
"title": "doi:10.3390/jcm8091446.",
|
| 157 |
+
"author": "M. Arsalan, M. Oqais, tahir Mahmood, S. W. Cho, K. R. Park, Aiding the diagnosis of diabetic and hypertensive retinopathy using artificial intelligence-based semantic segmentation, Journal of Clinical Medicine 8 (9) (2019).",
|
| 158 |
+
"venue": null,
|
| 159 |
+
"url": "https://doi.org/10.3390/jcm8091446"
|
| 160 |
+
}
|
| 161 |
+
},
|
| 162 |
+
{
|
| 163 |
+
"4": {
|
| 164 |
+
"title": "doi:10.1109/TMI.2021.3111679.",
|
| 165 |
+
"author": "J. Wei, G. Zhu, Z. Fan, J. Liu, Y. Rong, J. Mo, W. Li, X. Chen, Genetic u-net: Automatically designed deep networks for retinal vessel segmentation using a genetic algorithm, IEEE Transactions on Medical Imaging 41 (2) (2022) 292\u2013307.",
|
| 166 |
+
"venue": null,
|
| 167 |
+
"url": "https://doi.org/10.1109/TMI.2021.3111679"
|
| 168 |
+
}
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"5": {
|
| 172 |
+
"title": "doi:10.1109/ICCV.2019.00140.",
|
| 173 |
+
"author": "A. Howard, M. Sandler, B. Chen, W. Wang, L.-C. Chen, M. Tan, G. Chu, V. Vasudevan, Y. Zhu, R. Pang, H. Adam, Q. Le, Searching for MobileNetV3, in: IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 1314\u20131324.",
|
| 174 |
+
"venue": null,
|
| 175 |
+
"url": "https://doi.org/10.1109/ICCV.2019.00140"
|
| 176 |
+
}
|
| 177 |
+
},
|
| 178 |
+
{
|
| 179 |
+
"6": {
|
| 180 |
+
"title": "doi:10.1109/ISBI48211.2021.9434086.",
|
| 181 |
+
"author": "Y. Li, E. Chouzenoux, B. Charmettant, B. Benatsou, J.-P. Lamarque, N. Lassau, Lightweight u-net for lesion segmentation in ultrasound images, in: 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), 2021, pp. 611\u2013615.",
|
| 182 |
+
"venue": null,
|
| 183 |
+
"url": "https://doi.org/10.1109/ISBI48211.2021.9434086"
|
| 184 |
+
}
|
| 185 |
+
}
|
| 186 |
+
],
|
| 187 |
+
"url": "http://arxiv.org/html/2407.18970v3"
|
| 188 |
+
}
|
20240921/2408.11926v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240921/2408.13140v3.json
ADDED
|
@@ -0,0 +1,491 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"title": "Verification of Geometric Robustness of Neural Networks via Piecewise Linear Approximation and Lipschitz Optimisation",
|
| 3 |
+
"abstract": "We address the problem of verifying neural networks against\ngeometric transformations of the input image, including rotation,\nscaling, shearing, and translation. The proposed method computes\nprovably sound piecewise linear constraints for the pixel values\nby using sampling and linear approximations in combination with\nbranch-and-bound Lipschitz optimisation. The method\nobtains provably tighter over-approximations of the perturbation\nregion than the present state-of-the-art.\nWe report results from\nexperiments on a comprehensive set of verification benchmarks on MNIST and CIFAR10.\nWe show that\nour proposed implementation resolves up to 32% more verification cases than\npresent approaches.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Neural networks as used in mainstream applications, including computer\nvision, are known to be fragile and susceptible to adversarial\nattacks [18 ###reference_b18###]. The area of formal\nverification of neural networks is concerned with the development of\nmethods to establish whether a neural network is robust, with\nrespect to its classification output, to variations of the image.\nA large body of literature has so far focused on norm-bounded input\nperturbations, aiming to demonstrate that imperceptible adversarial alterations of the pixels cannot alter the classifier\u2019s\nclassification ( robustness).\nIn safety-critical applications such as autonomous driving, however, resistance to norm-bounded perturbations is inadequate to guarantee safe deployment.\nIn fact, image classifiers need to be\nrobust against a number of variations of the image, including\ncontrast, luminosity, hue, and beyond.\nA particularly important class\nof specifications concerns robustness to geometric\nperturbations of the input\nimage [23 ###reference_b23###, 28 ###reference_b28###, 33 ###reference_b33###, 1 ###reference_b1###]. These\nmay include translation, shearing, scaling, and rotation.\nOwing to the highly nonlinear variations of the\npixels in geometric transformations, verifying robustness to these perturbations\nis intrinsically a much harder problem than robustness.\nPrevious work over-approximates these variations through hyper-rectangles [33 ###reference_b33###] or pairs of linear bounds over the pixel values [1 ###reference_b1###],\nhence failing to capture most of the complexity of the perturbation region.\nDeveloping more precise methods for verifying geometric\nrobustness remains an open challenge. In this paper we work towards this end. Specifically, we make\nthree contributions:\nWe present a piecewise linear relaxation method to approximate the set\nof images generated by geometric transformations, including rotation,\ntranslation, scaling, and shearing. This construction can incorporate\nprevious approaches [33 ###reference_b33###, 1 ###reference_b1###]\nas special cases while supporting additional constraints, allowing\nsignificantly tighter over-approximations of the perturbation region.\nWe show that sound piecewise linear constraints, the building blocks of the proposed relaxation, can be generated\nvia suitable modifications of a previous\napproach [1 ###reference_b1###] that generates linear\nconstraints using sampling, linear and Lipschitz optimisation. We\nderive formal results as well as effective heuristics that enable us\nto improve the efficiency of the linear and Lipschitz optimisations in\nthis context\n(cf. Propositions 1 ###reference_position1###\u20143 ###reference_position3###). As\nwe demonstrate, the resulting piecewise constraints can be readily\nused within existing tight neural network verifiers.\nWe introduce an efficient implementation for the verification method\nabove and discuss experimental results showing considerable gains in\nterms of verification accuracy on a comprehensive set of benchmark networks.\nThe rest of this paper is organized as follows: Section 2 ###reference_### discusses related work. In Section 3 ###reference_### we\nintroduce the problem of verifying neural networks against geometric\nrobustness properties. In Section 4 ###reference_### we present our\nnovel piecewise linear approximation strategy via sampling,\noptimisation and shifting. 
In Section 5 ###reference_### we discuss\nthe experimental results obtained and contrast the present method\nagainst the state-of-the-art on benchmark networks. We\nconclude in Section 6 ###reference_###. Our code is publicly available on GitHub: https://github.com/benbatten/PWL-Geometric-Verification."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": "We here briefly discuss related work from -based\nneural network verification, geometric robustness and\nformal verification thereof."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Geometric robustness verification",
|
| 21 |
+
"text": "Our main contribution is a new piecewise linear relaxation of\ngeometric transformations to verify robustness of neural networks to geometric perturbations. We here introduce relevant notation in the\nverification problem and present the geometric attack model."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "Piecewise linear formulation",
|
| 27 |
+
"text": "As mentioned above, the pixel value function\n at location is generally\nnonlinear and nonsmooth with respect to the transformation parameters\n. This is one source of difficulty for solving the\nverification problem (1 ###reference_###). In this section, we\nintroduce a new convex relaxation method to derive tight\nover-approximations of .\n###figure_1###"
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "4.1",
|
| 31 |
+
"parent_section_id": "4",
|
| 32 |
+
"section_name": "Piecewise linear bounds",
|
| 33 |
+
"text": "Deriving an interval bound for each pixel , i.e., , for all and lower and upper bounds , is arguably the simplest way to get a convex relaxation [33 ###reference_b33###, 23 ###reference_b23###]. However, even a small geometric transformation can lead to a large interval bound, making this approach too loose for effective verification.\nThis naive interval bound approach has been extended in [1 ###reference_b1###], where linear lower and upper bounds were used for each pixel value, i.e.,\nThe linear bounds (3 ###reference_###), however, can be still too\nloose to approximate the nonlinear function\n (see Figure 1 ###reference_### for illustration). Our key\nidea is to use piecewise linear bounds to approximate the pixel\nvalues:\n, where is the number of piecewise segments, define\nthe piecewise linear lower bound, and define the piecewise\nlinear upper bound.\nWe remark that the pixel values constrained by (4 ###reference_###) form a convex set.\nFurthermore, our approach can include the strategies in [33 ###reference_b33###, 1 ###reference_b1###] as special cases.\nEmploying the relative constraints among the piecewise segments will result in a tighter set.\nFor each pixel value, we would like to derive optimal and sound piecewise linear bounds by minimizing the approximation error. Specifically, we aim to compute the lower bound via\nComputing the upper bound for (4 ###reference_###) is similar. This optimisation\nproblem (5 ###reference_###) is highly nontrivial to solve\nsince\nthe integral cost function is hard to evaluate due to the nonlinearity of .\nMotivated by [1 ###reference_b1###], we first\nsample the transformation parameter from to\nobtain the sampled pixel values , and\nthen solve a sampled version\nof (5 ###reference_###). The resulting piecewise\nbound is guaranteed to be sound on the sampling points but could be unsound on non-sampled points. To derive a\nfinal sound piecewise bounds for ,\nwe bound the maximum violation over the entire\n using a branch-and-bound Lipschitz optimisation\nprocedure."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "4.2",
|
| 37 |
+
"parent_section_id": "4",
|
| 38 |
+
"section_name": "Linear optimisation based on sampling points",
|
| 39 |
+
"text": "Here, we first randomly select transformation parameters , , to obtain a sampled version of (5 ###reference_###) as follows\n\\linenomathNonumbers\nWe denote the optimal cost value of (6 ###reference_###) as .\nIn (6 ###reference_###), the number of piecewise\nlinear segments is fixed a priori.\nStill, problem (6 ###reference_###) is nontrivial to solve\njointly for all piecewise segments unless \n(where (6 ###reference_###) is reduced to a single\nlinear program). One difficulty is to determine the effective domain\nof each piecewise linear segment.\nTo alleviate this, we propose to split\nthe whole domain into sub-domains , and then optimize each piecewise linear\nsegment over , , individually.\nWe then use\nthe following independent linear programs to approximate the\nsolution to (6 ###reference_###):\nfor . Note that in\n(7 ###reference_###), we minimise the\napproximation error over only the sample points within a given domain\n; however, we force each segment to satisfy the constraints at every sample point over the whole domain.\nWe have the following result for the quality of the solution from (7 ###reference_###).\nGiven any subdomains , , the\noptimal solutions , , to (7 ###reference_###) are\nsuboptimal to (6 ###reference_###), i.e.,\n. There exists a set of\nsubdomains , , such that the optimal\nsolutions to (6 ###reference_###)\nand (7 ###reference_###) are identical,\ni.e., .\nConsider the piecewise linear function in the objective function (6 ###reference_###). Let be the effective piecewise domain of the th segment, i.e.,\nThen, the objective function (6 ###reference_###) can be equivalently written into\nTherefore, (6 ###reference_###) is equivalent to\n\\linenomathNonumbers\nNote that the piecewise domains are determined by the linear segments implicitly in (8 ###reference_###). We need to simultaneously optimize the choices of in (10 ###reference_###), making it computationally hard to solve.\nA suboptimal solution for (10 ###reference_###) is to a priori fix the effective domain and optimize over only, i.e.,\n\\linenomathNonumbers\nwhich is decoupled into individually linear programs,\nTherefore, it is clear that . On the other hand, suppose the optimal solution to (6 ###reference_###) leads to the optimal effective domains in (8 ###reference_###). Then, using this set , the decoupled linear programs (11 ###reference_###) are equivalent to (10 ###reference_###) and (6 ###reference_###).\n\u220e\nTo obtain a good solution (6 ###reference_###),\nchoosing the subdomains becomes essential. A uniform\ngrid partition is one, naive choice. Another is to partition the\nsubdomains based on the distribution of the sampling points\n.\nThe details of the splitting procedure are provided in the appendix.\n(Explicit input splitting vs. piecewise linear constraints)\nWe note that one can perform explicit input splitting\n and verify each of them by\nsolving (1 ###reference_###) separately in order to certify\nthe original large domain . The main drawback of this\nexplicit input splitting is that we need to call a verifier for\neach subdomain which can be hugely time consuming and not scalable. On the contrary, it only requires to solve multiple small linear\nprograms (7 ###reference_###) to derive\nour piece-wise linear constraints. 
Then, we only need to call a\nverifier once to solve the verification\nproblem (1 ###reference_###) over .\nFor tight verifiers, such as those mentioned in Section 2 ###reference_###, this\nprocess is much more efficient than explicit input splitting."
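For a one-dimensional transformation parameter, each sub-problem in (7) is a small linear program in the slope and intercept of a single lower-bound segment: maximise the segment's total value at the samples inside its sub-domain (equivalently, minimise the gap to the sampled pixel values there), subject to the soundness constraint at every sample. A minimal sketch of this formulation using SciPy's `linprog` is shown below; the function and array names are illustrative assumptions rather than the paper's code.

```python
import numpy as np
from scipy.optimize import linprog

def lower_segment_lp(theta_all, values_all, in_subdomain):
    """Fit one lower-bound segment a * theta + b, in the spirit of (7), for scalar theta.

    Minimises the total gap (value_j - a * theta_j - b) over samples in the
    sub-domain, subject to a * theta_j + b <= value_j at *all* samples.
    """
    theta_sub = theta_all[in_subdomain]
    # Dropping the constant term, the objective is to maximise a * sum(theta_sub) + b * n.
    c = np.array([-theta_sub.sum(), -float(theta_sub.size)])
    A_ub = np.column_stack([theta_all, np.ones_like(theta_all)])  # a * theta_j + b <= value_j
    b_ub = values_all
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None), (None, None)])
    if not res.success:
        raise RuntimeError(res.message)
    return res.x  # (slope a, intercept b)

# Toy usage: two sub-domains split at theta = 0 on a stand-in pixel value curve.
theta = np.linspace(-1.0, 1.0, 100)
values = 0.5 + 0.4 * np.abs(theta)
print(lower_segment_lp(theta, values, theta <= 0.0),
      lower_segment_lp(theta, values, theta > 0.0))
```

The resulting segments are guaranteed sound at every sample, but not yet over the whole parameter interval; that gap is closed by the Lipschitz shifting step of the next subsection.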
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4.3",
|
| 43 |
+
"parent_section_id": "4",
|
| 44 |
+
"section_name": "Lipschitz optimisation for obtaining sound piecewise linear bounds",
|
| 45 |
+
"text": "The piecewise linear constraints\nfrom (7 ###reference_###) are valid for the\nsampling points . To make\nthe constraints sound over all , we must shift\nthem such that all points on the pixel value function,\n, satisfy the constraints in\n(4 ###reference_###). For this, we define a new function that\ntracks the violation of a piecewise bound over the entire domain\n:\nwhere Then, we naturally have a sound piecewise linear lower bound as\nHowever, computing the exact maximum is computationally hard\ndue to the nonconvexity, nonlinearity and nonsmoothness of . Instead, given any , we can use a branch-and-bound Lipschitz optimisation procedure to find satisfying\nTo establish the branch-and-bound Lipschitz optimisation procedure, we need to\ncharacterise the properties of the violation function .\nThe violation function is nonconvex, nonsmooth, and Lipschitz continuous\nover . Furthermore, there exist\n, , such that\nThe pixel value function is given by\n\nWe know that the spatial transformation and are continuous and differentiable everywhere. The interpolation function is continuous everywhere, but it is only differentiable within each interpolation region and it can be nonsmooth on the boundary. Also, and are generally nonconvex.\nIn addition, the piecewise linear function\n\nis continuous but not differentiable everywhere. Therefore, the violation function is nonconvex and nonsmooth in general. Finally, all the functions , , and are Lipschitz continuous, so is the violation function . Thus, there exist\n, , such that (13 ###reference_###) holds.\n\u220e\nThe properties of the violation function in Proposition 2 ###reference_position2### are directly inherited from nonconvexity and nonsmoothness of the interpolation function . The Lipschitz continuity is also from the interpolation function and the piecewise linear function.\nWith the information of in (13 ###reference_###), we are\nready to get a lower and an upper bound for upon evaluating\nthe function at any point :\nwhere denotes the difference of the lower and upper bound in each box constraint of . These lower and upper bounds (14 ###reference_###) are useful in the branch-and-bound procedure.\nStill, we need estimate the Lipschitz constant in (13 ###reference_###). In our work, we show how to estimate the constant based on the gradient of whenever it is differentiable (note that is not differentiable everywhere)\nLet be the subset of \nwhere is differentiable. Then, the Lipschitz constants in (13 ###reference_###) can be chosen as\n\nwhere is a basis vector with only the -th element being one and the rest being zero.\nThis proof is motivated by [21 ###reference_b21###]. In order to prove Proposition 3 ###reference_position3###, we first state a useful result from [21 ###reference_b21###, Lemma 3]. Let be Lipschitz continuous over an open set . We denote as the subset of where is differentiable. We also let be the set of for which the directional derivative, , exists and . Finally, we let be the set\n\nThen, we have the following inequality [21 ###reference_b21###, Lemma 3]\nWe now proceed to prove Proposition 3 ###reference_position3###.\nFix any , and we define a function as\n\nSince is Lipschitz continuous in , it is clear that is Lipschitz continuous on the interval . 
Thus, by Rademacher\u2019s Theorem, is differentiable everywhere except for a set of measure zero.\nWe can further define a Lebesgue integrable function that is equal to almost everywhere as follows\nNote that if is differentiable at some point, we have\nThen we have the following inequalities\nFurthermore, considering the inequality in (15 ###reference_###) [21 ###reference_b21###, Lemma 3], we have\nwhere is a basis vector with only the -th element being one and the rest being zero. Therefore, the Lipschitz constants in (13 ###reference_###) can be chosen as\n\n\u220e\nMaximum directional gradient. To bound the maximum violation in (12 ###reference_###) using (14 ###reference_###), we need to estimate the constant , and Proposition 3 ###reference_position3### requires us to calculate the maximum directional gradient . Each component of varies independently with respect to any constituent of the transformation composition, , . Each depends only on a transformation, , and interpolation, . The only component that is not differentiable everywhere in the parameter space , is interpolation - this is due to it being disjoint across interpolation regions. We overcome this by calculating the interpolation gradient, separately in each interpolation region, and taking the maximum interval of gradients from the union, , where are the relevant interpolation regions, and . Computing a bound on this way mirrors the IBP-based procedure outlined in [1 ###reference_b1###]. With this we can calculate an upper bound on to be applied in the Lipschitz algorithm.\nBranch-and-bound Lipschitz optimisation procedure. Similar to [1 ###reference_b1###], we use a branch-and-bound procedure (see Appendix) where and are given as inputs alongside the Lipschitz error, , and samples per subdomain, . The procedure first samples the violation function , obtaining maximum value candidates; these are placed in a list of 3-tuples with the upper bound, , and corresponding domain, . The key upper bound operation is obtained using (14 ###reference_###).\nWe then check whether each 3-tuple in our list meets the termination criteria, as parameterised by . If the requirement is satisfied for all elements then we terminate and return . Until the requirement is met for every list element we iteratively split unsatisfied subdomains. This process is repeated until a satisfactory maximum candidate is found, splitting in each iteration. We can ignore any sub-domain, , of where the function bound in is smaller than a maximum value candidate in any other sub-domain. Deciding how to split subdomains is non-trivial for higher dimensional parameter spaces. In the case we need only decide where to split on a single axis, for which we use the domain midpoint. The crux of our algorithm is approximating the gradient of when it is differentiable, as stated in Proposition 3 ###reference_position3### (see appendix for further details on the branch-and-bound procedure). For bounding the violation of piecewise linear bounds we can consider the piecewise bound itself to be made of linear sub-regions with each one bounded by the intersection with the neighbouring linear piece - or the lower and upper bounds on the transformation parameters. We can then bound the Lipschitz constant in the same way as for a single linear bound, instead starting with sub-domains. 
Solving the Lipschitz bounding procedure for each linear segment over only its local domain in this way enables us to bound the Lipschitz constant of a piecewise linear bound in the same time as a linear bound takes."
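The bounding step that drives this procedure is the Lipschitz inequality: on a sub-interval, the maximum of the violation function is at most its value at the midpoint plus the Lipschitz constant times the interval radius, while sampled values give lower-bound candidates; sub-intervals whose upper bound cannot beat the best candidate are discarded and the rest are split at the midpoint. The following one-dimensional sketch assumes a valid Lipschitz constant is supplied externally and uses made-up stand-in functions; it is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def bab_max_upper_bound(violation, lo, hi, lipschitz, eps=0.01, n_samples=4):
    """Return M with max of `violation` on [lo, hi] <= M <= true max + eps.

    `violation` must accept NumPy arrays; `lipschitz` is an (assumed valid)
    Lipschitz constant of `violation` on [lo, hi].
    """
    best = -np.inf                    # best sampled candidate for the maximum
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        mid, rad = 0.5 * (a + b), 0.5 * (b - a)
        upper = float(violation(np.array([mid]))[0]) + lipschitz * rad
        best = max(best, float(np.max(violation(np.linspace(a, b, n_samples)))))
        if upper <= best or upper - best <= eps:
            continue                  # box pruned, or already tight enough
        stack.append((a, mid))        # otherwise split at the midpoint
        stack.append((mid, b))
    return best + eps                 # sound upper bound on the maximum violation

# Toy usage: shift an unsound constant lower bound g(theta) = 0.6 until it is sound.
pixel = lambda t: 0.5 + 0.4 * np.abs(np.asarray(t))   # stand-in pixel value curve
viol = lambda t: 0.6 - pixel(t)                       # positive where the bound is unsound
delta = bab_max_upper_bound(viol, -1.0, 1.0, lipschitz=0.4)
print(0.6 - delta)                                    # shifted, now-sound lower bound
```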
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "5",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "Experimental Evaluation",
|
| 51 |
+
"text": "In this section we present three sets of results: (i) a quantitative\nstudy directly comparing the model-agnostic bounds produced by our\npiecewise linear approach against the state-of-the-art linear\nbounds [1 ###reference_b1###], (ii) an empirical evaluation of\nverification results obtained using linear and piecewise linear\nbounds, without input splitting and using the same neural network\nverifier [3 ###reference_b3###], and (iii) a comparison\nof our results against the present\nstate-of-the-art method [1 ###reference_b1###]."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "5.1",
|
| 55 |
+
"parent_section_id": "5",
|
| 56 |
+
"section_name": "Experimental setup",
|
| 57 |
+
"text": "We consider the MNIST image recognition dataset [25 ###reference_b25###] and CIFAR10 [24 ###reference_b24###].\nIn line with the previous literature [1 ###reference_b1###],\nwe use two fully-connected ReLU networks, MLP2 and MLP6, and one\nconvolutional ReLU network, CONV, from the first competition for\nneural network verification (VNN-COMP) [38 ###reference_b38###]. The\nfully-connected networks comprise 2 and 6 layers respectively. Each\nlayer of each of the networks has 256 ReLU nodes. The convolutional\nnetwork comprises two layers. The first layer has 32 filters of size\n, a padding of 2 and strides of 2. The second layer has 64\nfilters of size of , a padding of 2 and strides of 1.\nAdditionally, we employ a larger convolutional ReLU network from\nrelevant previous work [1 ###reference_b1###], composed of\nthree layers: a convolutional layer with 32 filters of size \nand strides of 2, a convolutional layer with 64 filters of size\n and strides of 2, and a fully connected layer with 200\nnodes. All experiments were carried out on an Intel Core\ni9-10940X (3.30GHz, 28 cores) equipped with 256GB RAM and running\nLinux kernel 5.12. DeepG experiment ANS: we do not use GPU in these experiments.\nOnce a convex over-approximation of the attack space is computed, (cf. Section 3 ###reference_###) a neural network verifier is required to provide a lower bound on problem (1 ###reference_###).\nUnless stated otherwise, the verification results reported in this work are obtained using VENUS, a complete MILP-based verification toolkit for feed-forward neural networks [3 ###reference_b3###].\n###figure_2### ###figure_3### ###table_1###"
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "5.2",
|
| 61 |
+
"parent_section_id": "5",
|
| 62 |
+
"section_name": "Experimental results",
|
| 63 |
+
"text": "In the following, we will use \u201cL\" to denote the linear relaxation from equation (3 ###reference_###), and \u201cPWL\" to denote the piecewise linear relaxation from equation (4 ###reference_###).\nFigure 2 ###reference_### is a direct comparison of bound tightness between our piecewise linear bounds and the current state-of-the-art linear bounds [1 ###reference_b1###]. For each image, linear and piecewise linear bounds are generated, each one capturing the reachable pixel values for a given transformation. We always use two piecewise segments () and use a Lipschitz error of 0.01 to compute bounds.\nThe area enclosed by each set of bounds is then calculated and averaged for every pixel over all images. In each case the piecewise linear bounds are guaranteed to be tighter (enclose a smaller area) than the linear bounds, as in Section 4 ###reference_###.\nFigure 2 ###reference_### shows the relative area\n(specifically, with\n and being the volume enclosed by the\npiecewise linear and linear bounds, respectively) of the two bound\ntypes. In Figure 2 ###reference_###, there is an initial\nincrease in relative tightness for all transformations \u2013 this is a\nresult of linear bounds being unable to efficiently capture the\nincreasing nonlinearity in the pixel value curve,\n. After an initial increase, the behaviour for different transformations diverges. For rotation, the relative advantage of the piecewise bounds continues to increase up to 15 degrees. For scaling, however, there is a peak at 1.25 magnification, followed by a decrease in the relative tightness. This result is explained by a corresponding increase in the complexity of the pixel value curve. Notably, the piecewise bounds are best suited to nonmonotonic pixel\nvalue curves with a single, sharp vertex. For curves with many\nvertices and large fluctuations, piecewise linear bounds become\nincreasingly linear (the gradient of the pieces converge) to maintain\nconvexity. Though this is the case for , as we study here, for\nlarger numbers of piecewise segments the advantage over linear bounds\nwill continue to hold, as the piecewise bounds approximate the convex\nhull of the pixel values for .\nThe plots for shearing and translation show a similar pattern to scaling. Although the relative tightness may decrease for larger transformations, the total bounded area increases, making any proportional reduction in area more significant.\nTable 1 ###reference_### reports the experimental results obtained for verification queries using VENUS, on the VNN-COMP networks. For each type of input bound \u2013 piecewise linear and linear \u2013 the table shows the percentage of certified images (Verified column), the percentage of images for which a valid counter example was found (Falsified column), and the average verification time.\nWe verify the robustness of each of the networks with respect to one of four transformations - rotation, scaling, shearing, or translation - on 50 randomly selected images from the MNIST test set. For each verification query we use a timeout of 30 minutes.\nWe observe a considerable performance advantage using piecewise linear bounds for the convolution network, in every case, at least doubling the count of verifiable images.\nFor the 6-layer MLP network, many of the transformations tried could not be verified, leading to numerous counter examples and time-outs. 
However, for every transformation the piecewise linear bounds were able to find more counterexamples than linear bounds \u2013 this is a result of the improved tightness of piecewise linear bounds. For the 2-layer MLP, results across the bound types are very similar; in some cases they are equal. This is due to two factors, both of which stem from the network\u2019s small size. Firstly, the 2-layer network is the least robust of all three. Accordingly, our results are for very small transformations for which the pixel value curve is approximately linear. In these cases, linear bounds can capture the input set as well as piecewise linear bounds. Secondly, the advantage of piecewise linear bounds\u2019 tightness is compounded over each layer of a network \u2013 the 2-layer MLP is so small that this effect is minimal, further aligning the performance of the approaches.\nFinally, the use of piecewise linear constraints results in a reduction of average verification times on both the 6-layer MLP and the convolutional network: this is due to the fact that their relative tightness compensates for the additional cost of their encoding, leading the employed MILP-based verifier to positive lower bounds on the verification problem (1 ###reference_###) in less time.\nIn Table 2 ###reference_### we provide a comparison of verification results obtained using VENUS with both linear and piecewise linear constraints, with the DeepG [1 ###reference_b1###] results, obtained using linear constraints and the DeepPoly [33 ###reference_b33###] verifier, which relies on a relatively loose LP relaxation of (1 ###reference_###). Further, we use a MILP-based verifier which enabled us to add the pixel domain constraints in addition to our transformation-based bounds. This, coupled with the tighter verifier, enables our linear bounds to outperform those from DeepG.\nWe consider the MNIST and CIFAR10 benchmarks presented in Balunovi\u0107 et al. [1 ###reference_b1###]. The MNIST example consists of verifying a 30 degree rotation transformation by way of ten 3-degree sub-problems.\nThis is in contrast to Table 1 ###reference_###, where each perturbation is represented by a single set of bounds and a single verifier call per image.\nTable 2 ###reference_### shows that, even under the small-perturbation setting, the use of tighter verification algorithms (L versus DeepG) increases the number of verified properties.\nFurthermore, we show that the method proposed in this work, PWL, leads to the tightest certification results.\nThe CIFAR10 example comprises a composition of rotation and shearing of 2 degrees and 2% respectively. This query is solved via 4 sub-problems (with each transformation domain split in half). The results show a 12% improvement for the PWL bounds over the DeepG result. However, much of this gain comes from the verifier itself. The gap between the linear bounds and their piecewise counterpart is 1%. We attribute this smaller gap to the relatively small domain over which each sub-problem runs.\nNevertheless, we point out that verifying perturbations through a series of sub-problems is extremely expensive, as it requires repeated calls to both neural network verifiers, and to the constraint-generation procedure (including the branch-and-bound-based Lipschitz optimisation).\nFor this reason, we focus on the verification setting without transformation splitting, and aim to maximize certifications through the use of tight verifiers and over-approximations of the geometric transforms."
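For reference, the "relative bound area" plotted in Figure 2 is 1 - V_PWL / V_L, where each V is the area enclosed between a pixel's upper and lower bounds over the parameter range. A small numerical sketch of that comparison (all function names are illustrative assumptions):

```python
import numpy as np

def enclosed_area(lower_fn, upper_fn, lo, hi, n=1000):
    """Area between an upper and a lower bound over [lo, hi] (trapezoidal rule)."""
    t = np.linspace(lo, hi, n)
    return float(np.trapz(upper_fn(t) - lower_fn(t), t))

def relative_bound_area(pwl_lower, pwl_upper, lin_lower, lin_upper, lo, hi):
    """1 - V_PWL / V_L, as in Figure 2; larger values mean tighter PWL bounds."""
    v_pwl = enclosed_area(pwl_lower, pwl_upper, lo, hi)
    v_lin = enclosed_area(lin_lower, lin_upper, lo, hi)
    return 1.0 - v_pwl / v_lin
```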
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "6",
|
| 67 |
+
"parent_section_id": null,
|
| 68 |
+
"section_name": "Conclusions",
|
| 69 |
+
"text": "We have introduced a new piecewise linear approximation method for\ngeometric robustness verification. Our approach can generate\nprovably tighter convex relaxations for images obtained by geometric\ntransformations than the state-of-the-art\nmethods [33 ###reference_b33###, 1 ###reference_b1###]. Indeed, we have shown experimentally that\nthe proposed method can provide better verification precision in\ncertifying robustness against geometric transformations than prior\nwork [1 ###reference_b1###], while being more computational\nefficient.\nDespite the positive results brought by our piecewise linear approximation method, further topics deserve further exploration. Firstly, it remains challenging to obtain the optimal piecewise linear constraints via (6 ###reference_###). To get a good set of piecewise linear constraints, our current method (7 ###reference_###) requires to obtain a good heuristic partition of the domain . It will be interesting to further investigate and quantify the suboptimality of the solution from (7 ###reference_###). Second, the number of piecewise linear segment is a hyperparameter in our framework. A larger value leads to a better approximation of the pixel value function in theory; however, this also results in more linear constraints for the verification problem in practice. Future work will investigate how to choose a good value of based on the curvature of of the pixel value function."
|
| 70 |
+
}
|
| 71 |
+
],
|
| 72 |
+
"appendix": [
|
| 73 |
+
{
|
| 74 |
+
"section_id": "Appendix x1",
|
| 75 |
+
"parent_section_id": null,
|
| 76 |
+
"section_name": "Appendix",
|
| 77 |
+
"text": ""
|
| 78 |
+
},
|
| 79 |
+
{
|
| 80 |
+
"section_id": "Appendix 1",
|
| 81 |
+
"parent_section_id": null,
|
| 82 |
+
"section_name": "Appendix A Linear optimisation over sub-domains",
|
| 83 |
+
"text": "Our discussion below focuses on in (6 ###reference_###), and with this choice, we have already found promising improvements in our experiments (see the main text). With this constraint we can find suboptimal piecewise bounds by solving two independent linear optimisation problems, where each problem is applied over a subset of the piecewise domain, divided at a given sample point, . We name the parameter sub-spaces divided by , , and where . Expressing (6 ###reference_###) in this way gives\nIn (16a ###reference_.1###) and (16b ###reference_.2###) we optimise the area over over only the sample points within a piece\u2019s domain, , or ; however, we enforce the constraints at every sample point. By doing this we guarantee convexity of our piecewise constraints. We develop a heuristic to determine the sample point, , at which we split based on the error between sampled points and optimal linear bounds\nwhere is the splitting point for the lower bound. We calculate correspondingly using the lower linear bound. There exists a splitting point, , that would produce optimal piecewise bounds, but finding it is infeasible. In practice, we first compute a single linear bound for lower and upper constraints and then use this bound to compute the splitting point from 17 ###reference_###. Then, once the piecewise bound is obtained, half of the original linear bound is effectively discarded for the verification procedure. We compute the bounds in this way for two reasons: firstly, it enables us to apply our splitting heuristic in 17 ###reference_###, and secondly, it is computationally efficient in our experimental setting where we require the linear bounds for comparison."
|
| 84 |
+
},
|
| 85 |
+
{
|
| 86 |
+
"section_id": "Appendix 2",
|
| 87 |
+
"parent_section_id": null,
|
| 88 |
+
"section_name": "Appendix B Details of branch-and-bound procedure",
|
| 89 |
+
"text": "With our unsound constraints, our method closely follows that of [1 ###reference_b1###], with the important exception that we treat our single piecewise bound as two, separate linear bounds with domains, , and . We first define a function, , to track the violation of a bound by the pixel value function, . In the case that the lower bound is piecewise, we will maximise twice over , and , and once over . Maximisation of is done via a branch-and-bound Lipschitz procedure. Algorithm 1 ###reference_### shows a simplified version of the implementation we use. For each instance of , we first approximate the Lipshitz constant, , and use it to bound\nwhere is the midpoint of . We find upper bound candidates by sampling the violation function at four, evenly spaced points in ; the largest valued obtained becomes the maximum value candidate, . We aim to find a maximum value-bound pair that satisfies , with given. This process is repeated until a satisfactory maximum candidate is found, splitting in each iteration. We can ignore any sub-domain, , of where the function bound in is smaller than a maximum value candidate in any other sub-domain. This is because we can guarantee that the maximum value, in this case, is not in the sub-domain. We deal only with 1-dimensional parameter spaces for which we split at the midpoint. The outline of this procedure is given in Algorithm 1 ###reference_###.\nInput: , , , , \nOutput:"
|
| 90 |
+
}
|
| 91 |
+
],
|
| 92 |
+
"tables": {
|
| 93 |
+
"1": {
|
| 94 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Comparison of verification results for piecewise linear\nconstraints and linear constraints.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.1.1.1.1\" rowspan=\"2\" style=\"padding-left:4.8pt;padding-right:4.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.1.1.1.1\">Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.1.1.1.2\" rowspan=\"2\" style=\"padding-left:4.8pt;padding-right:4.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.1.1.2.1\">Attack</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S5.T1.1.1.1.3\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">Verified</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S5.T1.1.1.1.4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">Falsified</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S5.T1.1.1.1.5\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">Time (s)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.2.2.1\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">L</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.2.2.2\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">PWL</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.1.2.2.3\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">L</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.1.2.2.4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">PWL</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.1.2.2.5\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">L</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.1.2.2.6\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">PWL</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.3.3.1\" rowspan=\"4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.3.3.1.1\">MLP2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.3.3.2\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">R(5)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.3.3.3\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.3.3.4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.3.3.4.1\">28</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.1.3.3.5\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">74</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.1.3.3.6\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">72</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.1.3.3.7\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">0.7</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.1.3.3.8\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">3.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.4.4\">\n<td class=\"ltx_td ltx_align_center 
ltx_border_r\" id=\"S5.T1.1.4.4.1\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">Sh(0.2)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.4.4.2\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.4.4.3\" style=\"padding-left:4.8pt;padding-right:4.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.4.4.3.1\">26</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.4.4.4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">80</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.1.4.4.5\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">74</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.4.4.6\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">1.2</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.4.4.7\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">12</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.5.5.1\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">Sc(1.1)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.5.5.2\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">24</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.5.5.3\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">24</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.5.5.4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">76</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.1.5.5.5\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">76</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.5.5.6\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">1.1</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.5.5.7\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">51</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.6.6.1\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">T(0.1)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.6.6.2\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.6.6.3\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">16</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.6.6.4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">84</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.1.6.6.5\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">84</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.6.6.6\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">11</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.6.6.7\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">54</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.7.7.1\" rowspan=\"4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.7.7.1.1\">MLP6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.7.7.2\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">R(15)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.7.7.3\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.7.7.4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.7.7.4.1\">2</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.1.7.7.5\" 
style=\"padding-left:4.8pt;padding-right:4.8pt;\">12</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.1.7.7.6\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">32</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.1.7.7.7\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">1602</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.1.7.7.8\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">1253</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.8.8.1\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">Sh(0.5)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.8.8.2\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.8.8.3\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">0</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.8.8.4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">16</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.1.8.8.5\" style=\"padding-left:4.8pt;padding-right:4.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.8.8.5.1\">68</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.8.8.6\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">1591</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.8.8.7\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">778</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.9.9.1\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">Sc(1.3)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.9.9.2\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.9.9.3\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">0</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.9.9.4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">24</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.1.9.9.5\" style=\"padding-left:4.8pt;padding-right:4.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.9.9.5.1\">78</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.9.9.6\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">1404</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.9.9.7\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">727</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.10.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.10.10.1\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">T(0.2)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.1.10.10.2\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.10.10.3\" style=\"padding-left:4.8pt;padding-right:4.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.10.10.3.1\">2</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.10.10.4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">26</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.1.10.10.5\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">74</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.10.10.6\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">1397</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.10.10.7\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">648</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.11.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r ltx_border_t\" 
id=\"S5.T1.1.11.11.1\" rowspan=\"4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\"><span class=\"ltx_text\" id=\"S5.T1.1.11.11.1.1\">CONV</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.1.11.11.2\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">R(10)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.1.11.11.3\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">20</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.1.11.11.4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.11.11.4.1\">48</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.1.11.11.5\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">2</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T1.1.11.11.6\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">0</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.1.11.11.7\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">1447</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T1.1.11.11.8\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">1044</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.12.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.12.12.1\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">Sh(0.2)</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.12.12.2\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">18</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.1.12.12.3\" style=\"padding-left:4.8pt;padding-right:4.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.12.12.3.1\">50</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.12.12.4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">0</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.1.12.12.5\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">0</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.12.12.6\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">1548</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.12.12.7\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">1044</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.13.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.1.13.13.1\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">Sc(1.3)</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.13.13.2\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">0</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.1.13.13.3\" style=\"padding-left:4.8pt;padding-right:4.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.13.13.3.1\">10</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.13.13.4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">4</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T1.1.13.13.5\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">4</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.13.13.6\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">1750</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T1.1.13.13.7\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">1663</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.14.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T1.1.14.14.1\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">T(0.15)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T1.1.14.14.2\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">0</td>\n<td class=\"ltx_td ltx_align_left 
ltx_border_bb ltx_border_r\" id=\"S5.T1.1.14.14.3\" style=\"padding-left:4.8pt;padding-right:4.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.14.14.3.1\">32</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T1.1.14.14.4\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">2</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"S5.T1.1.14.14.5\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">0</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T1.1.14.14.6\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">1767</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T1.1.14.14.7\" style=\"padding-left:4.8pt;padding-right:4.8pt;\">1397</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 95 |
+
"capture": "Table 1: Comparison of verification results for piecewise linear\nconstraints and linear constraints."
|
| 96 |
+
},
|
| 97 |
+
"2": {
|
| 98 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Comparison of L and PWL using <span class=\"ltx_text ltx_font_typewriter\" id=\"S5.T2.2.1\">VENUS</span>, with verification results taken from DeepG\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2408.13140v3#bib.bib1\" title=\"\">1</a>]</cite>.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.3.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T2.3.1.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.1.1.1.1\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T2.3.1.1.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.1.1.2.1\">Transformation</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T2.3.1.1.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T2.3.1.1.3.1\">Accuracy (%)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S5.T2.3.1.1.4\">DeepG</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" colspan=\"2\" id=\"S5.T2.3.1.1.5\">Linear (Ours)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" colspan=\"2\" id=\"S5.T2.3.1.1.6\">PWL (Ours)</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.3.2.2.1\">Certified (%)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.3.2.2.2\">Certified (%)</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.3.2.2.3\">Time (s)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.3.2.2.4\">Certified (%)</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.3.2.2.5\">Time (s)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.3.3.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.1.1\">MNIST</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.1.2\">R(30)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.1.3\">99.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.1.4\">87.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.1.5\">90.8</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.3.3.1.6\">37.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.1.7\">92.9</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.3.3.1.8\">28.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.4.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.4.2.1\">CIFAR</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.4.2.2\">R(2)Sh(2)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.4.2.3\">68.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.4.2.4\">54.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.4.2.5\">65.0</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.3.4.2.6\">239.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.3.4.2.7\">66.0</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.3.4.2.8\">204.9</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 99 |
+
"capture": "Table 2: Comparison of L and PWL using VENUS, with verification results taken from DeepG\u00a0[1]."
|
| 100 |
+
}
|
| 101 |
+
},
|
| 102 |
+
"image_paths": {
|
| 103 |
+
"1": {
|
| 104 |
+
"figure_path": "2408.13140v3_figure_1.png",
|
| 105 |
+
"caption": "Figure 1: Comparison of sound and unsound piecewise (PW) linear domains (our\nwork), sound linear domain (gray area) [33], and\ninterval bounds (dashed line) [33]. The true pixel\nvalue function (the green curve) is marked for a rotation of 18\u2218.",
|
| 106 |
+
"url": "http://arxiv.org/html/2408.13140v3/x1.png"
|
| 107 |
+
},
|
| 108 |
+
"2(a)": {
|
| 109 |
+
"figure_path": "2408.13140v3_figure_2(a).png",
|
| 110 |
+
"caption": "Figure 2: A comparison of area captured by piecewise linear and linear bounds as a function of transformation parameter. Relative bound area is defined as 1\u2212(VPWL/VL)1subscript\ud835\udc49PWLsubscript\ud835\udc49L1-(V_{\\text{PWL}}/V_{\\text{L}})1 - ( italic_V start_POSTSUBSCRIPT PWL end_POSTSUBSCRIPT / italic_V start_POSTSUBSCRIPT L end_POSTSUBSCRIPT ).",
|
| 111 |
+
"url": "http://arxiv.org/html/2408.13140v3/extracted/5830938/area_comparisons_scale.png"
|
| 112 |
+
},
|
| 113 |
+
"2(b)": {
|
| 114 |
+
"figure_path": "2408.13140v3_figure_2(b).png",
|
| 115 |
+
"caption": "Figure 2: A comparison of area captured by piecewise linear and linear bounds as a function of transformation parameter. Relative bound area is defined as 1\u2212(VPWL/VL)1subscript\ud835\udc49PWLsubscript\ud835\udc49L1-(V_{\\text{PWL}}/V_{\\text{L}})1 - ( italic_V start_POSTSUBSCRIPT PWL end_POSTSUBSCRIPT / italic_V start_POSTSUBSCRIPT L end_POSTSUBSCRIPT ).",
|
| 116 |
+
"url": "http://arxiv.org/html/2408.13140v3/extracted/5830938/area_comparisons_trans.png"
|
| 117 |
+
}
|
| 118 |
+
},
|
| 119 |
+
"validation": true,
|
| 120 |
+
"references": [
|
| 121 |
+
{
|
| 122 |
+
"1": {
|
| 123 |
+
"title": "Certifying geometric robustness of neural networks.",
|
| 124 |
+
"author": "M. Balunovi\u0107, M. Baader, G. Singh, T. Gehr, and M. Vechev.",
|
| 125 |
+
"venue": "NeurIPS19, 2019.",
|
| 126 |
+
"url": null
|
| 127 |
+
}
|
| 128 |
+
},
|
| 129 |
+
{
|
| 130 |
+
"2": {
|
| 131 |
+
"title": "Efficient neural network verification via layer-based semidefinite\nrelaxations and linear cuts.",
|
| 132 |
+
"author": "B. Batten, P. Kouvaros, A. Lomuscio, and Y. Zheng.",
|
| 133 |
+
"venue": "In IJCAI21, pages 2184\u20132190, 2021.",
|
| 134 |
+
"url": null
|
| 135 |
+
}
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"3": {
|
| 139 |
+
"title": "Efficient verification of relu-based neural networks via dependency\nanalysis.",
|
| 140 |
+
"author": "E. Botoeva, P. Kouvaros, J. Kronqvist, A. Lomuscio, and R. Misener.",
|
| 141 |
+
"venue": "In AAAI20, volume 34, pages 3291\u20133299, 2020.",
|
| 142 |
+
"url": null
|
| 143 |
+
}
|
| 144 |
+
},
|
| 145 |
+
{
|
| 146 |
+
"4": {
|
| 147 |
+
"title": "Lagrangian decomposition for neural network verification.",
|
| 148 |
+
"author": "R. Bunel, A. De Palma, A. Desmaison, K. Dvijotham, P. Kohli, P. H. Torr, and\nM. P. Kumar.",
|
| 149 |
+
"venue": "In Conference on Uncertainty in Artificial Intelligence,\n2020a.",
|
| 150 |
+
"url": null
|
| 151 |
+
}
|
| 152 |
+
},
|
| 153 |
+
{
|
| 154 |
+
"5": {
|
| 155 |
+
"title": "Branch and bound for piecewise linear neural network verification.",
|
| 156 |
+
"author": "R. Bunel, I. Turkaslan, P. Torr, M. P. Kumar, J. Lu, and P. Kohli.",
|
| 157 |
+
"venue": "Journal of Machine Learning Research, 21, 2020b.",
|
| 158 |
+
"url": null
|
| 159 |
+
}
|
| 160 |
+
},
|
| 161 |
+
{
|
| 162 |
+
"6": {
|
| 163 |
+
"title": "Enabling certification of verification-agnostic networks via\nmemory-efficient semidefinite programming.",
|
| 164 |
+
"author": "S. Dathathri, K. Dvijotham, A. Kurakin, A. Raghunathan, J. Uesato, R. Bunel,\nS. Shankar, J. Steinhardt, I. Goodfellow, P. Liang, and K. Pushmeet.",
|
| 165 |
+
"venue": "NeurIPS20, 2020.",
|
| 166 |
+
"url": null
|
| 167 |
+
}
|
| 168 |
+
},
|
| 169 |
+
{
|
| 170 |
+
"7": {
|
| 171 |
+
"title": "Scaling the convex barrier with active sets.",
|
| 172 |
+
"author": "A. De Palma, H. S. Behl, R. Bunel, P. H. S. Torr, and M. P. Kumar.",
|
| 173 |
+
"venue": "In International Conference on Learning Representations, 2021.",
|
| 174 |
+
"url": null
|
| 175 |
+
}
|
| 176 |
+
},
|
| 177 |
+
{
|
| 178 |
+
"8": {
|
| 179 |
+
"title": "A dual approach to scalable verification of deep networks.",
|
| 180 |
+
"author": "K. Dvijotham, R. Stanforth, S. Gowal, T. Mann, and P. Kohli.",
|
| 181 |
+
"venue": "In Conference on Uncertainty in Artificial Intelligence, 2018.",
|
| 182 |
+
"url": null
|
| 183 |
+
}
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"9": {
|
| 187 |
+
"title": "Formal verification of piece-wise linear feed-forward neural\nnetworks.",
|
| 188 |
+
"author": "R. Ehlers.",
|
| 189 |
+
"venue": "In ATVA17, volume 10482, pages 269\u2013286. Springer, 2017.",
|
| 190 |
+
"url": null
|
| 191 |
+
}
|
| 192 |
+
},
|
| 193 |
+
{
|
| 194 |
+
"10": {
|
| 195 |
+
"title": "Exploring the landscape of spatial robustness.",
|
| 196 |
+
"author": "L. Engstrom, B. Tran, D. Tsipras, L. Schmidt, and A. Madry.",
|
| 197 |
+
"venue": "In ICML19, volume 97, pages 1802\u20131811. PMLR, 2019.",
|
| 198 |
+
"url": null
|
| 199 |
+
}
|
| 200 |
+
},
|
| 201 |
+
{
|
| 202 |
+
"11": {
|
| 203 |
+
"title": "The robustness of deep networks: A geometrical perspective.",
|
| 204 |
+
"author": "A. Fawzi, S. Moosavi-Dezfooli, and P. Frossard.",
|
| 205 |
+
"venue": "IEEE Signal Processing Magazine, 34(6):50\u201362, 2017.",
|
| 206 |
+
"url": null
|
| 207 |
+
}
|
| 208 |
+
},
|
| 209 |
+
{
|
| 210 |
+
"12": {
|
| 211 |
+
"title": "Safety verification and robustness analysis of neural networks via\nquadratic constraints and semidefinite programming.",
|
| 212 |
+
"author": "M. Fazlyab, M. Morari, and G. J. Pappas.",
|
| 213 |
+
"venue": "IEEE TACON20, pages 1\u20131, 2020.",
|
| 214 |
+
"url": null
|
| 215 |
+
}
|
| 216 |
+
},
|
| 217 |
+
{
|
| 218 |
+
"13": {
|
| 219 |
+
"title": "Complete verification via multi-neuron relaxation guided\nbranch-and-bound.",
|
| 220 |
+
"author": "C. Ferrari, M. N. Mueller, N. Jovanovi\u0107, and M. Vechev.",
|
| 221 |
+
"venue": "In International Conference on Learning Representations, 2022.",
|
| 222 |
+
"url": null
|
| 223 |
+
}
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"14": {
|
| 227 |
+
"title": "Certified defense to image transformations via randomized smoothing.",
|
| 228 |
+
"author": "M. Fischer, M. Baader, and M. Vechev.",
|
| 229 |
+
"venue": "arXiv preprint arXiv:2002.12463, 2020.",
|
| 230 |
+
"url": null
|
| 231 |
+
}
|
| 232 |
+
},
|
| 233 |
+
{
|
| 234 |
+
"15": {
|
| 235 |
+
"title": "Scalable certified segmentation via randomized smoothing.",
|
| 236 |
+
"author": "M. Fischer, M. Baader, and M. Vechev.",
|
| 237 |
+
"venue": "In ICML21, pages 3340\u20133351. PMLR, 2021.",
|
| 238 |
+
"url": null
|
| 239 |
+
}
|
| 240 |
+
},
|
| 241 |
+
{
|
| 242 |
+
"16": {
|
| 243 |
+
"title": "Fuzz testing based data augmentation to improve robustness of deep\nneural networks.",
|
| 244 |
+
"author": "X. Gao, R. K. Saha, M. R. Prasad, and A. Roychoudhury.",
|
| 245 |
+
"venue": "In ICSE20, pages 1147\u20131158, 2020.",
|
| 246 |
+
"url": null
|
| 247 |
+
}
|
| 248 |
+
},
|
| 249 |
+
{
|
| 250 |
+
"17": {
|
| 251 |
+
"title": "Ai2: Safety and robustness certification of neural networks with\nabstract interpretation.",
|
| 252 |
+
"author": "T. Gehr, M. Mirman, D. Drachsler-Cohen, P. Tsankov, S. Chaudhuri, and\nM. Vechev.",
|
| 253 |
+
"venue": "In 2018 IEEE Symposium on Security and Privacy (SP), 2018.",
|
| 254 |
+
"url": null
|
| 255 |
+
}
|
| 256 |
+
},
|
| 257 |
+
{
|
| 258 |
+
"18": {
|
| 259 |
+
"title": "Explaining and harnessing adversarial examples.",
|
| 260 |
+
"author": "I. J. Goodfellow, J. Shlens, and C. Szegedy.",
|
| 261 |
+
"venue": "arXiv preprint arXiv:1412.6572, 2014.",
|
| 262 |
+
"url": null
|
| 263 |
+
}
|
| 264 |
+
},
|
| 265 |
+
{
|
| 266 |
+
"19": {
|
| 267 |
+
"title": "Deepsplit: An efficient splitting method for neural network\nverification via indirect effect analysis.",
|
| 268 |
+
"author": "P. Henriksen and A. Lomuscio.",
|
| 269 |
+
"venue": "In Proceedings of the 30th International Joint Conference on\nArtificial Intelligence (IJCAI21), 2021.",
|
| 270 |
+
"url": null
|
| 271 |
+
}
|
| 272 |
+
},
|
| 273 |
+
{
|
| 274 |
+
"20": {
|
| 275 |
+
"title": "Spatial transformer networks.",
|
| 276 |
+
"author": "M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu.",
|
| 277 |
+
"venue": "arXiv preprint arXiv:1506.02025, 2015.",
|
| 278 |
+
"url": null
|
| 279 |
+
}
|
| 280 |
+
},
|
| 281 |
+
{
|
| 282 |
+
"21": {
|
| 283 |
+
"title": "Exactly computing the local lipschitz constant of relu networks.",
|
| 284 |
+
"author": "M. Jordan and A. G. Dimakis.",
|
| 285 |
+
"venue": "arXiv preprint arXiv:2003.01219, 2020.",
|
| 286 |
+
"url": null
|
| 287 |
+
}
|
| 288 |
+
},
|
| 289 |
+
{
|
| 290 |
+
"22": {
|
| 291 |
+
"title": "Geometric robustness of deep networks: Analysis and improvement.",
|
| 292 |
+
"author": "C. Kanbak, S. Moosavi-Dezfooli, and P. Frossard.",
|
| 293 |
+
"venue": "In CVPR18, June 2018.",
|
| 294 |
+
"url": null
|
| 295 |
+
}
|
| 296 |
+
},
|
| 297 |
+
{
|
| 298 |
+
"23": {
|
| 299 |
+
"title": "Formal verification of cnn-based perception systems.",
|
| 300 |
+
"author": "P. Kouvaros and A. Lomuscio.",
|
| 301 |
+
"venue": "arXiv preprint arXiv:1811.11373, 2018.",
|
| 302 |
+
"url": null
|
| 303 |
+
}
|
| 304 |
+
},
|
| 305 |
+
{
|
| 306 |
+
"24": {
|
| 307 |
+
"title": "Learning multiple layers of features from tiny images.",
|
| 308 |
+
"author": "A. Krizhevsky.",
|
| 309 |
+
"venue": "2009.",
|
| 310 |
+
"url": null
|
| 311 |
+
}
|
| 312 |
+
},
|
| 313 |
+
{
|
| 314 |
+
"25": {
|
| 315 |
+
"title": "The mnist database of handwritten digits, 1998.",
|
| 316 |
+
"author": "Y. LeCun, C. Cortes, and C. J. Burges.",
|
| 317 |
+
"venue": null,
|
| 318 |
+
"url": null
|
| 319 |
+
}
|
| 320 |
+
},
|
| 321 |
+
{
|
| 322 |
+
"26": {
|
| 323 |
+
"title": "Tss: Transformation-specific smoothing for robustness certification,\n2021.",
|
| 324 |
+
"author": "L. Li, M. Weber, X. Xu, L. Rimanic, B. Kailkhura, T. Xie, C. Zhang, and B. Li.",
|
| 325 |
+
"venue": null,
|
| 326 |
+
"url": null
|
| 327 |
+
}
|
| 328 |
+
},
|
| 329 |
+
{
|
| 330 |
+
"27": {
|
| 331 |
+
"title": "Algorithms for verifying deep neural networks.",
|
| 332 |
+
"author": "C. Liu, T. Arnon, C. Lazarus, C. Strong, C. Barrett, and M. Kochenderfe.",
|
| 333 |
+
"venue": "Foundations and Trends\u00ae in Optimization,\n3-4:244\u2013404, 2020.",
|
| 334 |
+
"url": null
|
| 335 |
+
}
|
| 336 |
+
},
|
| 337 |
+
{
|
| 338 |
+
"28": {
|
| 339 |
+
"title": "Towards verifying robustness of neural networks against a family of\nsemantic perturbations.",
|
| 340 |
+
"author": "J. Mohapatra, T. Weng, P. Chen, S. Liu, and L. Daniel.",
|
| 341 |
+
"venue": "In CVPR20, pages 244\u2013252, 2020.",
|
| 342 |
+
"url": null
|
| 343 |
+
}
|
| 344 |
+
},
|
| 345 |
+
{
|
| 346 |
+
"29": {
|
| 347 |
+
"title": "Towards practical verification of machine learning: The case of\ncomputer vision systems.",
|
| 348 |
+
"author": "K. Pei, Y. Cao, S. Yang, and S. Jana.",
|
| 349 |
+
"venue": "CoRR, abs/1712.01785, 2017.",
|
| 350 |
+
"url": null
|
| 351 |
+
}
|
| 352 |
+
},
|
| 353 |
+
{
|
| 354 |
+
"30": {
|
| 355 |
+
"title": "Semidefinite relaxations for certifying robustness to adversarial\nexamples.",
|
| 356 |
+
"author": "A. Raghunathan, J. Steinhardt, and P. Liang.",
|
| 357 |
+
"venue": "In NeurIPS18, 2018.",
|
| 358 |
+
"url": null
|
| 359 |
+
}
|
| 360 |
+
},
|
| 361 |
+
{
|
| 362 |
+
"31": {
|
| 363 |
+
"title": "A convex relaxation barrier to tight robustness verification of\nneural networks.",
|
| 364 |
+
"author": "H. Salman, G. Yang, H. Zhang, C.-J. Hsieh, and P. Zhang.",
|
| 365 |
+
"venue": "In Neural Information Processing Systems, 2019.",
|
| 366 |
+
"url": null
|
| 367 |
+
}
|
| 368 |
+
},
|
| 369 |
+
{
|
| 370 |
+
"32": {
|
| 371 |
+
"title": "Fast and effective robustness certification.",
|
| 372 |
+
"author": "G. Singh, T. Gehr, M. Mirman, M. P\u00fcschel, and M. Vechev.",
|
| 373 |
+
"venue": "In Advances in Neural Information Processing Systems, 2018.",
|
| 374 |
+
"url": null
|
| 375 |
+
}
|
| 376 |
+
},
|
| 377 |
+
{
|
| 378 |
+
"33": {
|
| 379 |
+
"title": "An abstract domain for certifying neural networks.",
|
| 380 |
+
"author": "G. Singh, T. Gehr, M. P\u00fcschel, and M. Vechev.",
|
| 381 |
+
"venue": "In PACMPL19, volume 3, pages 1\u201330, 2019a.",
|
| 382 |
+
"url": null
|
| 383 |
+
}
|
| 384 |
+
},
|
| 385 |
+
{
|
| 386 |
+
"34": {
|
| 387 |
+
"title": "An abstract domain for certifying neural networks.",
|
| 388 |
+
"author": "G. Singh, T. Gehr, M. P\u00fcschel, and M. Vechev.",
|
| 389 |
+
"venue": "Proc. ACM Program. Lang., 2019b.",
|
| 390 |
+
"url": null
|
| 391 |
+
}
|
| 392 |
+
},
|
| 393 |
+
{
|
| 394 |
+
"35": {
|
| 395 |
+
"title": "The convex relaxation barrier, revisited: Tightened single-neuron\nrelaxations for neural network verification.",
|
| 396 |
+
"author": "C. Tjandraatmadja, R. Anderson, J. Huchette, W. Ma, K. PATEL, and J. Vielma.",
|
| 397 |
+
"venue": "NeurIPS20, 2020.",
|
| 398 |
+
"url": null
|
| 399 |
+
}
|
| 400 |
+
},
|
| 401 |
+
{
|
| 402 |
+
"36": {
|
| 403 |
+
"title": "Evaluating robustness of neural networks with mixed integer\nprogramming.",
|
| 404 |
+
"author": "V. Tjeng, K. Xiao, and R. Tedrake.",
|
| 405 |
+
"venue": "arXiv preprint arXiv:1711.07356, 2017.",
|
| 406 |
+
"url": null
|
| 407 |
+
}
|
| 408 |
+
},
|
| 409 |
+
{
|
| 410 |
+
"37": {
|
| 411 |
+
"title": "Verification of deep convolutional neural networks using imagestars.",
|
| 412 |
+
"author": "H. Tran, S. Bak, W. Xiang, and T. Johnson.",
|
| 413 |
+
"venue": "In International Conference on Computer Aided Verification,\npages 18\u201342. Springer, 2020.",
|
| 414 |
+
"url": null
|
| 415 |
+
}
|
| 416 |
+
},
|
| 417 |
+
{
|
| 418 |
+
"38": {
|
| 419 |
+
"title": "Vefication of neural networks competition.",
|
| 420 |
+
"author": "VNN-COMP.",
|
| 421 |
+
"venue": "https://sites.google.com/view/vnn20/vnncomp, 2020.",
|
| 422 |
+
"url": null
|
| 423 |
+
}
|
| 424 |
+
},
|
| 425 |
+
{
|
| 426 |
+
"39": {
|
| 427 |
+
"title": "Beta-CROWN: Efficient bound propagation with per-neuron split\nconstraints for complete and incomplete neural network verification.",
|
| 428 |
+
"author": "S. Wang, H. Zhang, K. Xu, X. Lin, S. Jana, C.-J. Hsieh, and J. Z. Kolter.",
|
| 429 |
+
"venue": "In Neural Information Processing Systems, 2021.",
|
| 430 |
+
"url": null
|
| 431 |
+
}
|
| 432 |
+
},
|
| 433 |
+
{
|
| 434 |
+
"40": {
|
| 435 |
+
"title": "Provable defenses against adversarial examples via the convex outer\nadversarial polytope.",
|
| 436 |
+
"author": "E. Wong and J. Kolter.",
|
| 437 |
+
"venue": "In ICML18, pages 5286\u20135295, 2018.",
|
| 438 |
+
"url": null
|
| 439 |
+
}
|
| 440 |
+
},
|
| 441 |
+
{
|
| 442 |
+
"41": {
|
| 443 |
+
"title": "Spatially transformed adversarial examples.",
|
| 444 |
+
"author": "C. Xiao, J. Zhu, B. Li, W. He, M. Liu, and D. Song.",
|
| 445 |
+
"venue": "arXiv preprint arXiv:1801.02612, 2018.",
|
| 446 |
+
"url": null
|
| 447 |
+
}
|
| 448 |
+
},
|
| 449 |
+
{
|
| 450 |
+
"42": {
|
| 451 |
+
"title": "Fast and complete: Enabling complete neural network verification with\nrapid and massively parallel incomplete verifiers.",
|
| 452 |
+
"author": "K. Xu, H. Zhang, S. Wang, Y. Wang, S. Jana, X. Lin, and C.-J. Hsieh.",
|
| 453 |
+
"venue": "In International Conference on Learning Representations, 2021.",
|
| 454 |
+
"url": null
|
| 455 |
+
}
|
| 456 |
+
},
|
| 457 |
+
{
|
| 458 |
+
"43": {
|
| 459 |
+
"title": "Invariance-inducing regularization using worst-case transformations\nsuffices to boost accuracy and spatial robustness.",
|
| 460 |
+
"author": "F. Yang, Z. Wang, and C. Heinze-Deml.",
|
| 461 |
+
"venue": "CoRR, abs/1906.11235, 2019.",
|
| 462 |
+
"url": null
|
| 463 |
+
}
|
| 464 |
+
},
|
| 465 |
+
{
|
| 466 |
+
"44": {
|
| 467 |
+
"title": "Provable defense against geometric transformations.",
|
| 468 |
+
"author": "R. Yang, J. Laurel, S. Misailovic, and G. Singh.",
|
| 469 |
+
"venue": "In The Eleventh International Conference on Learning\nRepresentations, 2023.",
|
| 470 |
+
"url": null
|
| 471 |
+
}
|
| 472 |
+
},
|
| 473 |
+
{
|
| 474 |
+
"45": {
|
| 475 |
+
"title": "Efficient neural network robustness certification with general\nactivation functions.",
|
| 476 |
+
"author": "H. Zhang, T.-W. Weng, P.-Y. Chen, C.-J. Hsieh, and L. Daniel.",
|
| 477 |
+
"venue": "In Neural Information Processing Systems, 2018.",
|
| 478 |
+
"url": null
|
| 479 |
+
}
|
| 480 |
+
},
|
| 481 |
+
{
|
| 482 |
+
"46": {
|
| 483 |
+
"title": "General cutting planes for bound-propagation-based neural network\nverification.",
|
| 484 |
+
"author": "H. Zhang, S. Wang, K. Xu, L. Li, B. Li, S. Jana, C.-J. Hsieh, and J. Z. Kolter.",
|
| 485 |
+
"venue": "In Neural Information Processing Systems, 2022.",
|
| 486 |
+
"url": null
|
| 487 |
+
}
|
| 488 |
+
}
|
| 489 |
+
],
|
| 490 |
+
"url": "http://arxiv.org/html/2408.13140v3"
|
| 491 |
+
}
|
20240921/2408.15020v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
20240921/2409.06554v2.json
ADDED
|
@@ -0,0 +1,332 @@
| 1 |
+
{
|
| 2 |
+
"title": "Modelling Global Trade with Optimal Transport",
|
| 3 |
+
"abstract": "Global trade is shaped by a complex mix of factors beyond supply and demand, including tangible variables like transport costs and tariffs, as well as less quantifiable influences such as political and economic relations. Traditionally, economists model trade using gravity models, which rely on explicit covariates but often struggle to capture these subtler drivers of trade. In this work, we employ optimal transport and a deep neural network to learn a time-dependent cost function from data, without imposing a specific functional form. This approach consistently outperforms traditional gravity models in accuracy while providing natural uncertainty quantification. Applying our framework to global food and agricultural trade, we show that the global South suffered disproportionately from the war in Ukraine\u2019s impact on wheat markets. We also analyze the effects of free-trade agreements and trade disputes with China, as well as Brexit\u2019s impact on British trade with Europe, uncovering hidden patterns that trade volumes alone cannot reveal.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "[lines=2,lraise=0.05,findent=0.1em, nindent=0em]International trade serves as the backbone of the world economy, distributing goods and connecting markets through global logistics networks. Its dynamics are driven by numerous factors beyond mere supply and demand, such as tariffs, non-tariff policy barriers, political and economic tensions, and disruptions caused by accidents, conflicts, and civil wars. Among all traded commodities, agricultural and food products hold particular interest for policymakers and the general public due to their significant volume, high trade value, and critical role in food security and resilience [1 ###reference_b1###, 2 ###reference_b2###]. Consumer food prices are a product of all the complexly interwoven factors governing trade. However, they do not always directly reflect the ease of doing business between any two countries. For instance, in May 2020, China imposed an 80% tariff on Australian barley, leading to a major restructuring of global supply chains (see fig. LABEL:fig:Barley_trade): Chinese demand was suddenly met from France, Canada, and Argentina, while Australia started exporting surplus barley e.g. to Saudi Arabia. Despite these shifts, for the next five months the global barley price barely budged [3 ###reference_b3###, 4 ###reference_b4###].\nModelling global trade has garnered significant attention in the economic literature, with gravity models being the most widely used approach [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###]. These models, named for their direct analogy to Newton\u2019s law of gravity, assume that the total trade of a given commodity between two countries and is proportional to the total output of the source country and the total expenditure of the destination country, as well as inversely related to a \u2018distance\u2019 between them:\nThis distance comprises all factors that contribute to the ease of selling goods produced in one country to another, including transportation costs, information costs, and tariff and non-tariff barriers to trade. Traditional gravity models use a set of covariates to estimate as\nwhere and are exogenous exporter and importer-side regressors [10 ###reference_b10###], are bilateral covariates, and , , are the coefficient vectors. Commonly used covariates include geographic proximity, the existence of trade agreements, colonial ties, tariffs, non-tariff barriers, or shared languages [11 ###reference_b11###]. The structural gravity model corrects eq. (1 ###reference_###) with import and export multilateral resistance terms, which account for the relative nature of bilateral trade shares. This adjustment has been shown to align with various microeconomic models [7 ###reference_b7###]. Gravity models have been widely used to study agrifood trade. For instance, [12 ###reference_b12###] estimate residual trade costs based on a micro-founded gravity equation, finding ad valorem costs to be 60% higher in the global South compared to the North. Studies have also investigated the impact of global and regional trade agreements [13 ###reference_b13###, 14 ###reference_b14###] and the effect of eliminating tariffs [15 ###reference_b15###, 16 ###reference_b16###].\nThe gravity-based approach is attractive to researchers due to its interpretability, mathematical simplicity, and consistency with various microeconomic theories [9 ###reference_b9###]. However, it is not without its limitations. 
For one, multilateral trade resistance terms, central to the structural gravity model, are unobservable and must be estimated, often using fixed effects. Elasticity and other key parameters are often unavailable at a granular level, requiring aggregation that can introduce bias [17 ###reference_b17###]. The model\u2019s cost function also depends heavily on the choice of covariates and functional form, making specification crucial for interpreting results. In addition, unobservables\u2014such as the subtle effects of changing political relations, public preferences, or aversions toward products from specific countries\u2014are absorbed in the error term. Finally, while trade costs are generally asymmetric (), commonly used covariates are not, making it difficult for a model to capture the inherent imbalances in trade relationships. See [11 ###reference_b11###, 9 ###reference_b9###, 18 ###reference_b18###] for a deeper discussion of challenges and best practices.\nIn this work, we present a more general approach that dispenses with the use of covariates and a functional form, instead inferring the cost directly from data. Our method is based on the optimal transport (OT) framework [19 ###reference_b19###], which generalises gravity-based models. In OT, trade flows are assumed to match supply and demand to minimise an overall cost. Mathematically, this is expressed as follows: let be a matrix quantifying the \u2018cost\u2019 (in a general sense) of moving goods from country to . Given the supply vector and the demand vector , the optimal transport problem consists in finding a transport plan, i.e. a matrix with entries modelling the total volume (or value) of transport from country to , such that the total cost\nis minimised. In addition, the marginal constraints\nmust be satisfied, ensuring that demand and supply are met. It is advantageous to add a regularisation term term to the cost, as it ensures existence of a unique solution and significantly improves computational efficiency; the total cost then becomes\nwhere denotes the entropy of and is a regularisation parameter. It can be shown that the solution will then be of the form\nwhere and are diagonal scaling matrices which ensure that the marginal constraints hold (see Methods).\nAs described in [20 ###reference_b20###], gravity models can be reformulated as solutions of a regularised OT problem with an appropriate choice of parameters. While OT-based models might appear to suggest a centralised control of flows, its dual formulation admits an alternative, decentralised interpretation (see Methods). Indeed, the dual problem can be interpreted as importers seeking to minimise the cost of purchasing commodities and exporters seeking to maximise their profit. The solution at equilibrium coincides with the solution of the OT problem [21 ###reference_b21###], which in its classic form (3 ###reference_###)\u2013(4 ###reference_###) is well understood. This is less true for the corresponding inverse problem we are interested in, despite its mathematical and practical importance: given a (possibly noisy) observation of , and , this problem consists in inferring the underlying cost . As shown in [22 ###reference_b22###], maximum likelihood estimation of underlying gravity model parameters can again be reformulated as an inverse optimal transport problem.\nA Ukrainian wheat exports in metric tons, 2021 (left) and 2022\n\n\n\n\nC Change in trade volume and cost, selected countries\n\n\n: Figure 1Ukrainian wheat exports, 2021\u20132022. 
A Network of Ukrainian exports, 2021 and 2022. Shown are the largest trading partners, making up 99% of Ukrainian exports. The blue node represents the total Ukrainian export volume (in metric tons), the red nodes are the import volumes. Edge widths represent the flow volume. B The change in trade volume (left) and trade cost (right) for the largest trading partners. C Percent change in trade volume (left bar) and change in trade cost (right bar) for selected countries.\nA Ukrainian wheat exports in metric tons, 2021 (left) and 2022\n\n\n\n\nC Change in trade volume and cost, selected countries\n\n\n: Figure 1Ukrainian wheat exports, 2021\u20132022. A Network of Ukrainian exports, 2021 and 2022. Shown are the largest trading partners, making up 99% of Ukrainian exports. The blue node represents the total Ukrainian export volume (in metric tons), the red nodes are the import volumes. Edge widths represent the flow volume. B The change in trade volume (left) and trade cost (right) for the largest trading partners. C Percent change in trade volume (left bar) and change in trade cost (right bar) for selected countries.\n###figure_1### ###figure_2### ###figure_3### The inference methodology presented in this work is a novel deep learning approach to solve the inverse OT problem, based on recent work on neural parameter calibration [23 ###reference_b23###, 24 ###reference_b24###]. We assume no underlying covariate structure, but instead infer a general cost matrix , parametrized as a deep neural network, directly from data. We train a neural network to recognise cost matrices from observations of transport plans for the global food and agricultural trade from 2000\u20132022 (the \u2018training data\u2019) by constraining it to satisfy eq. (6 ###reference_###). Put simply, this means fitting the mathematical optimal transport equation to the data in such a way that the predicted cost matrices reproduce the observations . The trained neural network then solves the inverse problem\non the observations. Though its ability to extrapolate to new observations depends on the amount of training data, its performance on the training data itself does not. A probability density on the estimates is then naturally obtained by \u2018pushing\u2019 the uncertainty on through , i.e.\n(see Methods). As we demonstrate, this approach produces trade flow estimates that are orders of magnitude more accurate than those of a covariate-based gravity model.\nThe dataset under consideration was assembled by the Food and Agricultural Organisation of the United Nations (FAO), which provides global trade matrices for over 500 products on its portal111https://www.fao.org/faostat/en/#home ###reference_### [25 ###reference_b25###]. Though extensive, many entries in the trade matrices are missing. Furthermore, the FAO reports two values for each bilateral flow : one reported by the exporter, and one reported by the importer. There is often a considerable discrepancy between the two, due to a multitude of epistemic factors the FAO lists in its accompanying report222https://files-faostat.fao.org/production/TM/TM_e.pdf ###reference_M/TM_e.pdf###. The uncertainty on our estimates naturally follows the uncertainty on the FAO data, without presupposing an underlying statistical model.\nWe apply our method to analyse global commodity flows from 2000\u20132022, examining the impacts of events, conflicts, trade agreements, and political changes on trade. 
The cost matrix uncovers economic effects that are not evident in trade volumes or retail prices alone. The article begins with a study of the war in Ukraine\u2019s impact on global wheat trade, followed by an analysis of free trade agreements and disputes in the Asia-Pacific, as well as the United Kingdom\u2019s 2016 exit from the European Union (Brexit). Finally, we compare our approach to a traditional gravity model, demonstrating its superior performance in both prediction accuracy and uncertainty."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Results",
|
| 15 |
+
"text": "A Russian wheat exports in metric tons, 2021 (left) and 2022\n\n\n\n\nC Change in trade volume and cost, selected countries\n\n\n: Figure 2The same plots as in figure LABEL:fig:Ukraine_wheat with Russia as the exporting partner.\nA Russian wheat exports in metric tons, 2021 (left) and 2022\n\n\n\n\nC Change in trade volume and cost, selected countries\n\n\n: Figure 2The same plots as in figure LABEL:fig:Ukraine_wheat with Russia as the exporting partner.\n###figure_4### ###figure_5### ###figure_6###"
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Case study I: the impact of the Ukrainian war on wheat trade",
|
| 21 |
+
"text": "The Russian Federation\u2019s invasion of Ukraine in early 2022 sent shock waves through global food markets [26 ###reference_b26###]. Russia and Ukraine are two of the largest exporters of wheat, together accounting for almost 28% of global wheat exports in 2020. The blockade of trading routes through the Black Sea and the closure or destruction of ports in Mykolaiv and Kherson meant a drop in trade to the overwhelming majority of Ukraine\u2019s export destinations, in some cases by as much as 100% (fig. LABEL:fig:Ukraine_wheatA\u2013B). An increase of wheat exports only occurred to Europe, most significantly to Poland, Spain, Slovakia, and Romania, as well as slight increases to Algeria, India, and T\u00fcrkiye. However, our analysis shows that, although trade shrank across the globe, the accompanying increase in trade costs disproportionately affected the global South, in particular African nations. Of the ten countries with the largest rise in import costs, four are in Africa, and all are in the global South, while of the ten countries with the largest decrease in trade barriers with Ukraine, seven are in Europe. Countries such as Tanzania or Tunisia, while experiencing a similar drop in trade as the US or France, simultaneously saw an increase in their trade costs. Canadian imports fell by 75%, yet trade utility remained constant, while similar drops in Syria or Egypt led to marked increases in trading barriers. European countries saw an average -0.22 point drop in trade barriers with Ukraine, while the African continent saw an average 0.03 point increase. Imports of wheat from Russia also fell globally (fig. LABEL:fig:Russia_wheat), again affecting Africa particularly severely. European imports of Russian wheat fell by around 40% with a 0.05 point increase in trade costs; African imports fell by on average 71% with a 0.27 point increase in trade costs. While many European countries saw their imports of Ukrainian wheat rise, Russian imports fell sharply. Two notable exceptions in our model are the United Kingdom and Netherlands, which saw increases in trading costs with Ukraine of 0.22 and 0.32 respectively. The two largest hubs for Russian wheat, Egypt and T\u00fcrkiye, saw no change in their import volumes or import barriers. Meanwhile, Iran saw a 0.6 point increase in trade barriers, leading to a 98% percent decline in Ukrainian wheat imports. For Russian wheat, the estimated increase in trade barriers was only 0.05, leading to a drop in imports of 46%. Russian-Iranian trade barriers were thus not markedly affected by the war, despite a drop in trade volumes.\nA Export of sugar products\u2020 to China, selected countries\n\n\nB Australian exports to China, selected commodities\n\n\n: Figure 3Trade with China. A Export of sugar products to China. Top row: estimated trade volume (light blue) in metric tons, as well as the reported values. Bottom row: estimated cost, together with the ASEAN and non-ASEAN averages. B Australian exports to China, selected commodities. Top row: model estimated flow and FAO data; bottom row: estimated cost. Indicated are the signing of ChAFTA (2015, green dotted line) as well as the start of the US-China and Australia-China trade disputes (2018 and 2020, red dotted lines). Errorbands indicate one standard deviation. \n\u2020Sugar products comprise: sugar, refined sugar, syrups, fructose, sugar confectionery. 
\u22c6Dairy products comprise: butter, skim milk of cows, cheese, other dairy products.\nA Export of sugar products\u2020 to China, selected countries\n\n\nB Australian exports to China, selected commodities\n\n\n: Figure 3Trade with China. A Export of sugar products to China. Top row: estimated trade volume (light blue) in metric tons, as well as the reported values. Bottom row: estimated cost, together with the ASEAN and non-ASEAN averages. B Australian exports to China, selected commodities. Top row: model estimated flow and FAO data; bottom row: estimated cost. Indicated are the signing of ChAFTA (2015, green dotted line) as well as the start of the US-China and Australia-China trade disputes (2018 and 2020, red dotted lines). Errorbands indicate one standard deviation. \n\u2020Sugar products comprise: sugar, refined sugar, syrups, fructose, sugar confectionery. \u22c6Dairy products comprise: butter, skim milk of cows, cheese, other dairy products.\n###figure_7### ###figure_8### A Global barley trade, 2018 (left) and 2021\n\n\nB Barley export cost to China, 2018 (left) and 2021\n\n\n\n\n: Figure 4Global barley trade between 2015\u20132022. After the introduction of Chinese import tariffs on Australian barley in May 2020, the entire supply chain restructured itself, with Chinese demand being supplied from France, Canada, and Ukraine, and Australia increasingly exporting to Saudi Arabia and Southeast Asia. A\u2013B Trade in in metric tons, 2018 and 2021. Import values are shown in red, export values in blue. C Model estimated trade volumes (top row) cost (bottom row) for selected countries. Dotted lines indicate the start of the US-China and Australia-China trade wars.\nA Global barley trade, 2018 (left) and 2021\n\n\nB Barley export cost to China, 2018 (left) and 2021\n\n\n\n\n: Figure 4Global barley trade between 2015\u20132022. After the introduction of Chinese import tariffs on Australian barley in May 2020, the entire supply chain restructured itself, with Chinese demand being supplied from France, Canada, and Ukraine, and Australia increasingly exporting to Saudi Arabia and Southeast Asia. A\u2013B Trade in in metric tons, 2018 and 2021. Import values are shown in red, export values in blue. C Model estimated trade volumes (top row) cost (bottom row) for selected countries. Dotted lines indicate the start of the US-China and Australia-China trade wars.\n###figure_9### ###figure_10### ###figure_11### A Global soya bean trade, 2016 (left) and 2018\n\n\n\n\n\n\n: Figure 5Global soya bean trade. A In 2018, the Chinese government raised import tariffs on American soya beans in a retaliatory action against US trade restrictions. The shortfall was met by imports from Brazil. B Soya bean yield in 100 g/hectare. Argentina in 2018 experience a major drop in yields, leading to an increase in exports from the US C Predicted trade volumes in metric tons (top row) and predicted cost (bottom row).\n###figure_12### ###figure_13### ###figure_14### : Figure 6Change in UK and Ireland (ROI) imports, 2016\u20132022. For each exporting country, the left two bars indicate the percent change in trade volume between 2016 and 2022 for the UK and the ROI respectively, the right two bars the change in import costs. A Vegetable imports. Top row: lettuce (including chicory), and other fresh vegetables; middle row: tomatoes; bottom row: cucumbers and gherkins. B Wine imports. \n\u2217Exporter for cucumbers is Greece.\n: Figure 6Change in UK and Ireland (ROI) imports, 2016\u20132022. 
For each exporting country, the left two bars indicate the percent change in trade volume between 2016 and 2022 for the UK and the ROI respectively, the right two bars the change in import costs. A Vegetable imports. Top row: lettuce (including chicory), and other fresh vegetables; middle row: tomatoes; bottom row: cucumbers and gherkins. B Wine imports. \n\u2217Exporter for cucumbers is Greece.\n###figure_15### ###figure_16###"
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Case study II: Trade in Southeast Asia and Asia-Pacific",
|
| 27 |
+
"text": "A series of free-trade agreements came into effect in Southeast Asia and the Asia-Pacific region in the 2000s and 2010s, significantly among them the China-Australia Free Trade Agreement (ChAFTA) in 2015, the ASEAN-China free trade agreement (ACFTA, gradually entering into force from 2003) and the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) between 11 counties bordering the Pacific Ocean (2018) [27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###]. Together with China\u2019s accession to the WTO in 2001 and its rapid economic growth, these trade agreements coincide with some of the largest increases in trade flows in recent history. In figure LABEL:fig:China_tradeA, we show trade flow of sugar and sugar products from Thailand, Malaysia, and India to China, as well the estimated costs. In our model, the cost of importing sugar from Thailand fell consistently from 2000\u20132022, following a general trend for ASEAN countries (bottom row, green line) which commenced around 2005. Indian exports, by comparison, remained relatively low up until 2015, when Indian prime minister Narendra Modi visited China, and top officials from both sides agreed to increase bilateral trade to US$100 billion by the end of the year. This visit marked a dramatic shift in Indo-Chinese trade, as exemplified by the huge increase of sugar trade. From 2015\u20132018, export cost from India dropped sharply by 33%, precipitating a steep increase in trade starting in 2018. By contrast, trade cost from non-ASEAN members has remained constant over the past twenty years (red line, fig. LABEL:fig:China_tradeA).\nThe PRC is one of Australia\u2019s largest export markets for food and agricultural products. Our analysis shows a precipitous reduction in trade barriers for Australian exports since China\u2019s accession to the WTO in 2001 (see fig. LABEL:fig:China_tradeB), particularly for beef, wheat, wine, and sugar. Between 2002 and 2010, these commodities saw a 30\u201380% drop in trade barriers. Our estimates indicate that ChAFTA had little effect on Australian trade costs, since it succeeded a period of deepening ties. Dairy barriers, for instance, had already fallen from 0.35 to 0.14 from 2000 to 2015, thereafter falling a further 0.03 points until 2020. Wine exports too saw their largest reductions in trade barriers between 2000 and 2010, only experiencing a 0.04 drop from 2015 to 2018 compared to the 0.59 point reduction from 2000\u20132015.\nIn January 2018, the Trump administration started imposing import tariffs on goods primarily from China. In response, the Chinese government increased tariffs on a variety of products, including agricultural imports. The largest agricultural export from the US to China, soya beans, were hit with a 25% import tariff [30 ###reference_b30###]. Meanwhile, political tensions between China and Australia caused Beijing to introduce high anti-dumping tariffs on Australian exports such as barley (80.5%) and wine (206%), starting in 2020 [31 ###reference_b31###]. Wine trade had previously been tariff-free since the signing of ChAFTA in 2015. Our analysis provides an estimate of the change in the ease of trading these measures induced (figs. LABEL:fig:China_tradeB and LABEL:fig:Barley_trade). Australian beef, wine and barley imports all experience large increases in cost, following the implosion of trade volumes. 
Australia was able to divert some of its excess barley supply to Saudi Arabia, which saw a decrease in trade barriers of over 0.5 points between 2019 and 2022 (fig. LABEL:fig:Barley_tradeC). Trade volumes to Vietnam also increased from 200,000 to 800,000 metric tons, though trade costs remained approximately constant. Meanwhile, after 2020 China doubled its barley imports from Canada and France. We found that import barriers from both countries were reduced slightly in 2021, though they increased again in the year after.\n: Figure 7Comparison with gravity model. A\u2013B Comparison plot of the OT and gravity estimates (-axis) versus the true data (-axis) on two selected commodities. Also shown is a linear fit (dotted line), its estimated slope , the Pearson coefficient of the fit , and the line (solid line). See fig. LABEL:fig:Gravity_comparisons_all in the appendix for an overview of all commodities. C\u2013D Comparison of the RMSE accuracies of the estimated transport volumes of the OT approach (blue) and the gravity model (orange). Values are averaged over all countries and years, with the errorbars showing one standard deviation from the mean (triangular marker). Also shown are the median values (diamond markers). Shown are the RMSE (left) and the RMSE in units of the standard deviation on the true data (right).\n: Figure 7Comparison with gravity model. A\u2013B Comparison plot of the OT and gravity estimates (-axis) versus the true data (-axis) on two selected commodities. Also shown is a linear fit (dotted line), its estimated slope , the Pearson coefficient of the fit , and the line (solid line). See fig. LABEL:fig:Gravity_comparisons_all in the appendix for an overview of all commodities. C\u2013D Comparison of the RMSE accuracies of the estimated transport volumes of the OT approach (blue) and the gravity model (orange). Values are averaged over all countries and years, with the errorbars showing one standard deviation from the mean (triangular marker). Also shown are the median values (diamond markers). Shown are the RMSE (left) and the RMSE in units of the standard deviation on the true data (right).\n###figure_17### ###figure_18### ###figure_19###"
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "Case study III: Brexit",
|
| 33 |
+
"text": "In 2016, the United Kingdom voted to leave the European Union, officially exiting the common market and customs union on December 31, 2020. This case study examines the impact of Brexit on British import patterns by comparing vegetable and wine imports from mainland Europe to both the United Kingdom and the Republic of Ireland (ROI), which remains part of the Eurozone and the common market. While both island nations naturally source the majority of their fresh produce from mainland Europe, their trading patterns have evolved in markedly different ways. Imports of lettuce from Europe generally fell for the UK, accompanied by a rise in import cost: \u201344% trade volume and +0.11 in import costs from the Netherlands, the largest exporter of lettuce and chicory to the UK, as well as a \u201321% drop in trade from Spain, though with no change in import cost. Ireland increased its imports of lettuce and other greens from the Netherlands, Spain, Italy, and Portugal, accompanied by a general decrease in trading costs. In the case of the Netherlands, Ireland saw a consistent reduction in vegetable trade costs, unlike the United Kingdom. It is interesting to note that the United Kingdom significantly increased its imports of vegetables from Morocco, accompanied by a precipitous drop in trade costs, indicating a facilitation of trade between the two countries in the wake of Brexit. This is not true for the ROI: though it increased its imports of Moroccan tomatoes by nearly twice as much as the UK, Irish trade costs still fell by less than for the UK.\nA more clear-cut trend emerges in the wine trade (fig. LABEL:fig:BrexitB): here, the UK was consistently affected more negatively than the Republic of Ireland: British import costs from all eight countries considered rose by considerably more than those of the ROI. An 11% drop in Spanish wine import was accompanied by a 0.09 point increase in trading costs, while a \u20138% change in Irish imports was nonetheless accompanied by a 0.01 point decrease in costs. Portugese wine imports to the UK rose by 24%, notwithstanding a 0.07 increase in trade costs. The picture is unchanged for South African, Australian, and New Zealand imports. The EU maintains free trade or regulatory agreements removing wine import duties with the former two [32 ###reference_b32###, 33 ###reference_b33###]. When the UK left the European Union, wine from Australia entered at the UK Global Tariff rate, which in mid-2023 was eliminated under the Australia-United Kingdom FTA [34 ###reference_b34###]. South African wines, by contrast, continued to be imported to the UK tariff-free post-Brexit [35 ###reference_b35###]. Yet here too, the United Kingdom\u2019s 5% decrease in imports was driven by a 0.07 increase in trading costs; Ireland, by contrast, imported an estimated 61% less wine, driven by a comparable 0.1 point increase in trading costs."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.4",
|
| 37 |
+
"parent_section_id": "2",
|
| 38 |
+
"section_name": "Comparison with Gravity model",
|
| 39 |
+
"text": "Lastly, we compare the performance of our method with a standard gravity model [11 ###reference_b11###, 36 ###reference_b36###, 37 ###reference_b37###, 38 ###reference_b38###], as specified in equations (1 ###reference_###)\u2013(2 ###reference_###). The covariates include geographic distance, shared borders, colonial ties, common language, regional trade agreements, tariffs, and importer/exporter fixed effects to account for multilateral resistance (see Supplementary Information for details). We estimate the coefficients using Poisson Pseudo Maximum Likelihood estimation and compare the accuracy of the estimated transport plans . Figures LABEL:fig:Gravity_compA\u2013B show scatter plots of the OT (blue) and gravity (orange) estimates against the FAO data. For all commodities studied, a linear fit through the OT estimates yields a near-perfect slope of with a Pearson coefficient close to 1, perfectly fitting the tail end of the distribution. In contrast, the gravity model\u2019s performance is much more volatile, with linear fits ranging from a Pearson coefficient of between 0.998 (best) to 0.747 (worst) (see also Supplementary Information). Due to model misspecification, the fits to the tails of the distributions are generally significantly poorer. Consequently, figure LABEL:fig:Gravity_compC shows that the OT approach significantly outperforms the gravity model in terms of RMSE, often by two to three orders of magnitude. Figure LABEL:fig:Gravity_compD illustrates that OT estimates typically fall within one standard deviation of the data uncertainty, whereas gravity estimates tend to range from one to two, at times even three to four standard deviations. The gravity model also exhibits much higher variance in accuracy compared to OT."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "Discussion",
|
| 45 |
+
"text": "This paper introduces a novel and versatile approach for identifying the drivers and barriers of global commodity trades. Using optimal transport theory, we are able to obtain a cost structure that is more expressive than a covariate-based gravity approach. Our estimates are thus orders of magnitude more accurate than the current state of the art, while providing consistent accuracy across datasets. The optimal transport approach models trade networks as a dynamical, interconnected system, allowing to capture complex rearrangements and network response dynamics to e.g. trade wars, conflicts, or shifts in political relations. By contrast, the covariate-based gravity approach fits each node in the network individually without taking interaction effects into account. Though the current work looks only at global agrifood markets, the methodology proposed is general and applicable to commodity flows, financial markets, or banking networks [39 ###reference_b39###]. Beyond economics, the optimal transport approach also relates e.g. to global migration flows, which can be estimated from stock data [40 ###reference_b40###, 41 ###reference_b41###]."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "Method",
|
| 51 |
+
"text": ""
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.1",
|
| 55 |
+
"parent_section_id": "4",
|
| 56 |
+
"section_name": "Entropy-regularised Optimal Transport",
|
| 57 |
+
"text": "In OT one wishes to find the optimal flow of mass from a source distribution to a target distribution, while minimising an overall transport cost. This abstract problem has a wide range of applications in economics, logistics, image restoration, transport systems, or urban structure [21 ###reference_b21###, 42 ###reference_b42###, 43 ###reference_b43###].\nConsider an -dimensional space , an -dimensional space , and furthermore a non-negative measure on . The entries of correspond to the cost of transporting mass from one location in to a target in . Given two probability measures and (the supply and demand), the OT problem consists in finding a transport plan minimising the overall cost eq. (3 ###reference_###). The transport plan must also satisfy the marginal constraints\nIn practice one usually solves the entropy regularised OT formulation, which can be solved much more efficiently [44 ###reference_b44###]; here, an additional term is added to the objective:\nwhere is a positive regularisation parameter. This regularisation prevents monopolisation, i.e. demand being supplied from only a few sources.\nThis constrained optimisation problem eq. (10 ###reference_0###) can be solved by considering the Lagrangian\nwith and Lagrangian multipliers. Minimising with respect to gives the solution\nor\nwhere and are diagonal matrices of Lagrangian multipliers.\nFinding and is achieved through an iterative procedure that is variously called Iterative Proportionate Procedural Fitting (IPFP), RAS, or Sinkhorn\u2019s algorithm [45 ###reference_b45###, 44 ###reference_b44###, 46 ###reference_b46###]. Define ; then, given an initial guess , we update to satisfy the first marginal constraint eq. (9 ###reference_###)\nSolving for gives\nwhere the division is understood element-wise. Similarly, we obtain the next update for as\nand so on.\nThe algorithm can thus be summarised as follows:\nUnder certain conditions, convergence of the algorithm to a unique solution is guaranteed [47 ###reference_b47###, 48 ###reference_b48###]. Note the important fact that the solution is invariant under scaling of the cost matrix, since the Lagrangian multipliers absorb the scaling; the transport plan thus does not depend on absolute cost values, only on their relative proportions.\nThe classic OT problem eq. (6 ###reference_###) can be interpreted as the central\u2019s planners problem of finding the optimal assignment/matching of supplies and demands. The dual OT problem is given by\nsuch that and satisfy\nwhere . Here and can be interpreted as the minimal cost of picking up and dropping off a good at locations respectively. The central planner problem of finding the best plan is therefore split into determining the optimal cost of collecting and delivering goods. The constraint (18 ###reference_8###) ensures optimality. If , that is the cost of picking up a good at location and dropping it off at location is larger than the transportation cost, it can not be optimal."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.2",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "Neural Inverse Optimal Transport",
|
| 63 |
+
"text": "To infer the cost matrix function from a dataset of transport plan observations , we build on the neural parameter estimation method first introduced in [23 ###reference_b23###] and subsequently expanded upon [24 ###reference_b24###]. We wish to train a neural network to solve the inverse OT problem . We do so by constructing a loss function that differentiates through the optimal transport equations, i.e.\nHere, is the estimated transport plan obtained by solving Sinkhorn\u2019s algorithm alg. [1 ###reference_###]. The second summand together with the restriction that is necessary to fix the cost matrix since the OT problem is in general invariant under affine transformations of . Using this approach, the quality of the prediction does not depend on the number of datasets used to train the neural network, since we are not performing regression, but rather fitting a mathematical model (or a set of parameters) to data. The data is processed in batches, and a gradient descent step performed on the neural network parameters after each batch. The loss is only calculated for links with trade flow .\nAs mentioned, the FAO dataset contains two values for each entry : one reported by the exporter, and one by the importer. Let by the transport plan where all entries are those reported by the exporters, and those where all are reported by the importers. The training data\u2014i.e., the data we use to train the function \u2014consists of only these two transport plans for each year: , giving a total training set size of , where are the number of observation points. A hyperparameter sweep showed that using a deep neural network with 5 layers, 60 nodes per layer, and hyperbolic tangent activation functions on all layers but the last, where we use a sigmoid, gives best results. Using a sigmoid activation function on the last layer ensures . We use the Adam optimizer [49 ###reference_b49###] to train the neural network. We pool all FAO trade matrices to only contain those countries that account for 99% of import and export volumes, subsuming all other countries in an \u2018Other\u2019 category (thereby ensuring that no flow is lost). Missing entries in the training data are masked and do not contribute to the loss function.\nUncertainty quantification on the estimated cost matrix is obtained by passing random samples of through the trained neural network . These samples are obtained by selecting either or uniformly at random for each entry of the transport plan, and passing this sample through the neural network. Repeating this times gives samples of , and inserting each estimate of into Sinkhorn\u2019s algorithm gives estimated transport plans . We generate samples for each year."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.3",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Code and data availability",
|
| 69 |
+
"text": "All code and data is available at https://github.com/ThGaskin/inverse-optimal-transport ###reference_l-transport###. Instructions for running the model are given in the README."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "Acknowledgements",
|
| 75 |
+
"text": "TG was funded by the University of Cambridge School of Physical Sciences VC Award via DAMTP and the Department of Engineering, and supported by EPSRC grant EP/X010503/1. AD and MTW acknowledge partial support by the EPSRC grant EP/X010503/1\n."
|
| 76 |
+
}
|
| 77 |
+
],
|
| 78 |
+
"appendix": [
|
| 79 |
+
{
|
| 80 |
+
"section_id": "Appendix x1",
|
| 81 |
+
"parent_section_id": null,
|
| 82 |
+
"section_name": "Comparison with Gravity model",
|
| 83 |
+
"text": "We consider the following gravity model for our comparison study [11 ###reference_b11###]:\nThe covariates 5\u20139 are taken from the CEPII database [36 ###reference_b36###]:\nare the time-dependent exporter-fixed effects,\nare the time-dependent importer-fixed effects,\nis the total production output of the exporter, in tonnes, as given by the FAO.\nis the total consumption of the importer, in tonnes,\nis the geodesic distance in km between the population centres of each country (harmonic average) (distw_harmonic),\nCNTG indicates whether the two share a land border (contig),\nCNLY is a binary variable indicating whether there ever existed colonial ties before 1948 between the two trading partners (col_dep_ever),\nLANG indicates whether the two share an official or primary language (comlang_off),\nRTA is a binary variable indicating whether there exists a bilateral regional trade agreement (rta_coverage, where the variable is 0 for rta_coverage == 0 and 1 else),\nis the remoteness index of the importer,\nTRFF is the tariff applied by the importer in the absence of a trade agreement. We use the most favoured nation tariff (maximum duty) as given by the WTO [38 ###reference_b38###]: MFN - Maximum duty by product groups.\nThe remoteness index accounts for the multilateral resistances [11 ###reference_b11###]. This gives a -dimensional regression problem for each commodity, where are the number of years in the dataset (the fixed-effects are time-dependent). We estimate the parameters using Poisson Pseudo Maximum Likelihood optimisation. Table LABEL:tab:Gravity_parameters gives the estimated parameters for each commodity. Figure LABEL:fig:Gravity_comparisons_all plots the estimated values against the reporter-averaged FAOStat values for both the OT and the gravity models. Also shown are a linear fit with slopes and Pearson coefficients indicated.\n\n\n###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### : Figure S1Comparison of the OT estimates (left, darkblue) and the Gravity estimates (orange) for each commodity. The -axis shows the true FAO value, while the -axis shows the estimated value. The solid line is the diagonal . Also shown are a linear fit (dashed line) as well the fitted slope and Pearson correlation of the fit.\n: Figure S1Comparison of the OT estimates (left, darkblue) and the Gravity estimates (orange) for each commodity. The -axis shows the true FAO value, while the -axis shows the estimated value. The solid line is the diagonal . Also shown are a linear fit (dashed line) as well the fitted slope and Pearson correlation of the fit.\n###figure_28### ###figure_29### ###figure_30###"
|
| 84 |
+
}
|
| 85 |
+
],
|
| 86 |
+
"tables": {},
|
| 87 |
+
"image_paths": {},
|
| 88 |
+
"validation": true,
|
| 89 |
+
"references": [
|
| 90 |
+
{
|
| 91 |
+
"1": {
|
| 92 |
+
"title": "Nature Food 1, 51\u201358 (2020).",
|
| 93 |
+
"author": "S Friel, A Schram, B Townsend, The nexus between international trade, food systems, malnutrition and climate change.",
|
| 94 |
+
"venue": null,
|
| 95 |
+
"url": null
|
| 96 |
+
}
|
| 97 |
+
},
|
| 98 |
+
{
|
| 99 |
+
"2": {
|
| 100 |
+
"title": "Nature Food 4, 22\u201329 (2023).",
|
| 101 |
+
"author": "A Wood, et al., Reframing the local\u2013global food systems debate through a resilience lens.",
|
| 102 |
+
"venue": null,
|
| 103 |
+
"url": null
|
| 104 |
+
}
|
| 105 |
+
},
|
| 106 |
+
{
|
| 107 |
+
"3": {
|
| 108 |
+
"title": "Econometrica 70, 1741\u20131779 (2002).",
|
| 109 |
+
"author": "J Eaton, S Kortum, Technology, Geography, and Trade.",
|
| 110 |
+
"venue": null,
|
| 111 |
+
"url": null
|
| 112 |
+
}
|
| 113 |
+
},
|
| 114 |
+
{
|
| 115 |
+
"4": {
|
| 116 |
+
"title": "American Economic Review 93, 170\u2013192 (2003).",
|
| 117 |
+
"author": "JE Anderson, E van Wincoop, Gravity with Gravitas: A Solution to the Border Puzzle.",
|
| 118 |
+
"venue": null,
|
| 119 |
+
"url": null
|
| 120 |
+
}
|
| 121 |
+
},
|
| 122 |
+
{
|
| 123 |
+
"5": {
|
| 124 |
+
"title": "American Economic Review 102, 94\u2013130 (2012).",
|
| 125 |
+
"author": "C Arkolakis, A Costinot, A Rodr\u00edguez-Clare, New Trade Models, Same Old Gains?",
|
| 126 |
+
"venue": null,
|
| 127 |
+
"url": null
|
| 128 |
+
}
|
| 129 |
+
},
|
| 130 |
+
{
|
| 131 |
+
"6": {
|
| 132 |
+
"title": "(United Nations), (2017).",
|
| 133 |
+
"author": "YV Yotov, R Piermartini, JA Monteiro, M Larch, An Advanced Guide to Trade Policy Analysis.",
|
| 134 |
+
"venue": null,
|
| 135 |
+
"url": null
|
| 136 |
+
}
|
| 137 |
+
},
|
| 138 |
+
{
|
| 139 |
+
"7": {
|
| 140 |
+
"title": "Journal of Agricultural Economics 60, 273\u2013297 (2009).",
|
| 141 |
+
"author": "A Olper, V Raimondi, Patterns and determinants of international trade costs in the food industry.",
|
| 142 |
+
"venue": null,
|
| 143 |
+
"url": null
|
| 144 |
+
}
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"8": {
|
| 148 |
+
"title": "Agricultural Economics 37, 93\u2013104 (2007).",
|
| 149 |
+
"author": "R Sarker, S Jayasinghe, Regional trade agreements and trade in agri-food products: evidence for the european union from gravity modeling using disaggregated data.",
|
| 150 |
+
"venue": null,
|
| 151 |
+
"url": null
|
| 152 |
+
}
|
| 153 |
+
},
|
| 154 |
+
{
|
| 155 |
+
"9": {
|
| 156 |
+
"title": "The World Economy 39, 1812\u20131833 (2016).",
|
| 157 |
+
"author": "I Mujahid, M Kalkuhl, Do trade agreements increase food trade?",
|
| 158 |
+
"venue": null,
|
| 159 |
+
"url": null
|
| 160 |
+
}
|
| 161 |
+
},
|
| 162 |
+
{
|
| 163 |
+
"10": {
|
| 164 |
+
"title": "Journal of Agricultural Economics 62, 525\u2013550 (2011).",
|
| 165 |
+
"author": "V Raimondi, A Olper, Trade elasticity, gravity and trade liberalisation: Evidence from the food industry.",
|
| 166 |
+
"venue": null,
|
| 167 |
+
"url": null
|
| 168 |
+
}
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"11": {
|
| 172 |
+
"title": "Agricultural Economics 44, 141\u2013159 (2013).",
|
| 173 |
+
"author": "G Philippidis, H Resano-Ezcaray, AI Sanju\u00e1n-L\u00f3pez, Capturing zero-trade values in gravity equations of trade: an analysis of protectionism in agro-food sectors.",
|
| 174 |
+
"venue": null,
|
| 175 |
+
"url": null
|
| 176 |
+
}
|
| 177 |
+
},
|
| 178 |
+
{
|
| 179 |
+
"12": {
|
| 180 |
+
"title": "The Review of Economics and Statistics 106, 1418\u20131426 (2024).",
|
| 181 |
+
"author": "H Breinlich, D Novy, JMC Santos Silva, Trade, Gravity, and Aggregation.",
|
| 182 |
+
"venue": null,
|
| 183 |
+
"url": null
|
| 184 |
+
}
|
| 185 |
+
},
|
| 186 |
+
{
|
| 187 |
+
"13": {
|
| 188 |
+
"title": "SN Business & Economics 3, 1\u201343 (2023).",
|
| 189 |
+
"author": "L Capoani, Review of the gravity model: origins and critical analysis of its theoretical development.",
|
| 190 |
+
"venue": null,
|
| 191 |
+
"url": null
|
| 192 |
+
}
|
| 193 |
+
},
|
| 194 |
+
{
|
| 195 |
+
"14": {
|
| 196 |
+
"title": "(American Mathematical Soc.) Vol. 58, (2021).",
|
| 197 |
+
"author": "C Villani, Topics in optimal transportation.",
|
| 198 |
+
"venue": null,
|
| 199 |
+
"url": null
|
| 200 |
+
}
|
| 201 |
+
},
|
| 202 |
+
{
|
| 203 |
+
"15": {
|
| 204 |
+
"title": "Transportation Research 1, 253\u2013269 (1967).",
|
| 205 |
+
"author": "A Wilson, A statistical theory of spatial distribution models.",
|
| 206 |
+
"venue": null,
|
| 207 |
+
"url": null
|
| 208 |
+
}
|
| 209 |
+
},
|
| 210 |
+
{
|
| 211 |
+
"16": {
|
| 212 |
+
"title": "(Princeton University Press), 1 edition, (2016).",
|
| 213 |
+
"author": "A Galichon, Optimal Transport Methods in Economics.",
|
| 214 |
+
"venue": null,
|
| 215 |
+
"url": null
|
| 216 |
+
}
|
| 217 |
+
},
|
| 218 |
+
{
|
| 219 |
+
"17": {
|
| 220 |
+
"title": "The Review of Economics and statistics 88, 641\u2013658 (2006).",
|
| 221 |
+
"author": "JS Silva, S Tenreyro, The log of gravity.",
|
| 222 |
+
"venue": null,
|
| 223 |
+
"url": null
|
| 224 |
+
}
|
| 225 |
+
},
|
| 226 |
+
{
|
| 227 |
+
"18": {
|
| 228 |
+
"title": "PNAS 120 (2023).",
|
| 229 |
+
"author": "T Gaskin, GA Pavliotis, M Girolami, Neural parameter calibration for large-scale multi-agent models.",
|
| 230 |
+
"venue": null,
|
| 231 |
+
"url": null
|
| 232 |
+
}
|
| 233 |
+
},
|
| 234 |
+
{
|
| 235 |
+
"19": {
|
| 236 |
+
"title": "PNAS Nexus 3, 63 (2024).",
|
| 237 |
+
"author": "T Gaskin, GA Pavliotis, M Girolami, Inferring networks from time series: A neural approach.",
|
| 238 |
+
"venue": null,
|
| 239 |
+
"url": null
|
| 240 |
+
}
|
| 241 |
+
},
|
| 242 |
+
{
|
| 243 |
+
"20": {
|
| 244 |
+
"title": "Nature Food 4, 508\u2013517 (2023).",
|
| 245 |
+
"author": "M Laber, P Klimek, M Bruckner, L Yang, S Thurner, Shock propagation from the Russia\u2013Ukraine conflict on international multilayer food production network determines global food availability.",
|
| 246 |
+
"venue": null,
|
| 247 |
+
"url": null
|
| 248 |
+
}
|
| 249 |
+
},
|
| 250 |
+
{
|
| 251 |
+
"21": {
|
| 252 |
+
"title": "Math. Finance 30, 3\u201346 (2020).",
|
| 253 |
+
"author": "K Giesecke, G Schwenkler, JA Sirignano, Inference for large financial systems.",
|
| 254 |
+
"venue": null,
|
| 255 |
+
"url": null
|
| 256 |
+
}
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"22": {
|
| 260 |
+
"title": "Proceedings of the National Academy of Sciences 116, 116\u2013122 (2018).",
|
| 261 |
+
"author": "JJ Azose, AE Raftery, Estimation of emigration, return migration, and transit migration between all pairs of countries.",
|
| 262 |
+
"venue": null,
|
| 263 |
+
"url": null
|
| 264 |
+
}
|
| 265 |
+
},
|
| 266 |
+
{
|
| 267 |
+
"23": {
|
| 268 |
+
"title": "Mathematical Population Studies 7, 239\u2013278 (1999) PMID: 12295226.",
|
| 269 |
+
"author": "F Willekens, Modeling approaches to the indirect estimation of migration flows: From entropy to EM.",
|
| 270 |
+
"venue": null,
|
| 271 |
+
"url": null
|
| 272 |
+
}
|
| 273 |
+
},
|
| 274 |
+
{
|
| 275 |
+
"24": {
|
| 276 |
+
"title": "Birkh\u00e4user, NY 55, 94 (2015).",
|
| 277 |
+
"author": "F Santambrogio, Optimal transport for applied mathematicians.",
|
| 278 |
+
"venue": null,
|
| 279 |
+
"url": null
|
| 280 |
+
}
|
| 281 |
+
},
|
| 282 |
+
{
|
| 283 |
+
"25": {
|
| 284 |
+
"title": "Foundations and Trends\u00ae in Machine Learning 11, 355\u2013607 (2019).",
|
| 285 |
+
"author": "G Peyr\u00e9, M Cuturi, , et al., Computational optimal transport: With applications to data science.",
|
| 286 |
+
"venue": null,
|
| 287 |
+
"url": null
|
| 288 |
+
}
|
| 289 |
+
},
|
| 290 |
+
{
|
| 291 |
+
"26": {
|
| 292 |
+
"title": "Advances in neural information processing systems 26 (2013).",
|
| 293 |
+
"author": "M Cuturi, Sinkhorn distances: Lightspeed computation of optimal transport.",
|
| 294 |
+
"venue": null,
|
| 295 |
+
"url": null
|
| 296 |
+
}
|
| 297 |
+
},
|
| 298 |
+
{
|
| 299 |
+
"27": {
|
| 300 |
+
"title": "The Annals of Mathematical Statistics 11, 427\u2013444 (1940).",
|
| 301 |
+
"author": "WE Deming, FF Stephan, On a Least Squares Adjustment of a Sampled Frequency Table When the Expected Marginal Totals are Known.",
|
| 302 |
+
"venue": null,
|
| 303 |
+
"url": null
|
| 304 |
+
}
|
| 305 |
+
},
|
| 306 |
+
{
|
| 307 |
+
"28": {
|
| 308 |
+
"title": "The Annals of Mathematical Statistics 35, 876\u2013879 (1964).",
|
| 309 |
+
"author": "R Sinkhorn, A Relationship Between Arbitrary Positive Matrices and Doubly Stochastic Matrices.",
|
| 310 |
+
"venue": null,
|
| 311 |
+
"url": null
|
| 312 |
+
}
|
| 313 |
+
},
|
| 314 |
+
{
|
| 315 |
+
"29": {
|
| 316 |
+
"title": "Annals of Statistics 23 (1995).",
|
| 317 |
+
"author": "L Rueschendorf, Convergence of the Iterative Proportional Fitting Procedure.",
|
| 318 |
+
"venue": null,
|
| 319 |
+
"url": null
|
| 320 |
+
}
|
| 321 |
+
},
|
| 322 |
+
{
|
| 323 |
+
"30": {
|
| 324 |
+
"title": "arXiv 1412.6980 [cs.LG] (2014).",
|
| 325 |
+
"author": "DP Kingma, J Ba, Adam: A Method for Stochastic Optimization.",
|
| 326 |
+
"venue": null,
|
| 327 |
+
"url": null
|
| 328 |
+
}
|
| 329 |
+
}
|
| 330 |
+
],
|
| 331 |
+
"url": "http://arxiv.org/html/2409.06554v2"
|
| 332 |
+
}
|
20240921/2409.07743v2.json
ADDED
|
@@ -0,0 +1,153 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "LOCKEY: A Novel Approach to Model Authentication and Deepfake Tracking",
|
| 3 |
+
"abstract": "This paper presents a novel approach to deter unauthorized deepfakes and enable user tracking in generative models, even when the user has full access to the model parameters, by integrating key-based model authentication with watermarking techniques. Our method involves providing users with model parameters accompanied by a unique, user-specific key. During inference, the model is conditioned upon the key along with the standard input. A valid key results in the expected output, while an invalid key triggers a degraded output, thereby enforcing key-based model authentication. For user tracking, the model embeds the user\u2019s unique key as a watermark within the generated content, facilitating the identification of the user\u2019s ID. We demonstrate the effectiveness of our approach on two types of models\u2014audio codecs and vocoders\u2014utilizing the SilentCipher watermarking method. Additionally, we assess the robustness of the embedded watermarks against various distortions, validating their reliability in various scenarios.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "The potential misuse of deep learning-based generative models has garnered significant attention in recent years due to the substantial advancements in generative AI, which have produced outputs nearly indistinguishable from real data [5 ###reference_b5###, 1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 6 ###reference_b6###, 4 ###reference_b4###]. Such generated content, commonly referred to as deepfakes, can be exploited for malicious purposes. Consequently, there has been significant research aimed at detecting deepfakes, with most approaches relying on identifying discrepancies between the statistical distributions of generated samples and that of real data [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###]. However, as generative models continue to improve, these discrepancies diminish, rendering traditional detection methods less effective. This necessitates an active approach to track generated content, with watermarking emerging as a widely adopted solution [12 ###reference_b12###, 11 ###reference_b11###, 13 ###reference_b13###, 10 ###reference_b10###].\nIn cases where generative models are provided as a service, such as through a cloud-based platforms, the service provider can embed watermarks containing user-specific metadata into the generated output, facilitating the tracking and detection of deepfakes. However, when users have complete access to the model parameters, it becomes easier to circumvent the embedding of the user-id into the generated output via the watermarking process, posing a significant challenge to ensuring the integrity and traceability of generated content.\nTo address this issue, we propose a key-based authentication method designed to prevent users from bypassing the watermarking process. In our approach, the generative model produces a degraded output if an invalid key is provided, while a valid key results in the implicit embedding of the user\u2019s unique ID as a watermark in the generated output. We demonstrate the generalizability of our method on two classes of models, audio codecs and vocoders, utilizing SilentCipher [13 ###reference_b13###], a deep learning-based watermarking technique. Specifically, we employ the Encodec model [14 ###reference_b14###] for audio codecs and the HiFi-GAN model[15 ###reference_b15###] for vocoders. Although audio codecs and vocoders are not strictly generative models, our motivation for enabling key-authentication for them stems from the growing trend of generative AI models that operate in latent spaces and use latent decoders to convert the latent representations to the data domain outputs [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 6 ###reference_b6###].\nDemo samples for our proposed method can be found at 111https://mayank-git-hub-sony.github.io/model_authentication_demo/ ###reference_l_authentication_demo/###"
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Related Works",
|
| 15 |
+
"text": "Previous works have focused on ensuring generative AI models produce signature watermarks that enable the identification of the model used to generate a sample [17 ###reference_b17###, 16 ###reference_b16###, 18 ###reference_b18###]. However, to the best of our knowledge, this is the first work to propose a key-based authentication mechanism in a white-box scenario, where users have access to both the model parameters and inference script, enabling the tracking of individual users.\nTraditional deepfake detection methods have primarily relied on passive approaches, such as training classifiers to distinguish between the distributions of generated and real samples [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###]. These approaches, however, face limitations due to diminishing differences between real and generated samples and the limited information that can be extracted from the classifiers. To address this, active methods like watermarking have been employed, increasing the capacity of the embedded message without being constrained by the indistinguishability of real and generated samples [12 ###reference_b12###, 11 ###reference_b11###, 13 ###reference_b13###]. While earlier watermarking techniques sufferent from perceptible noises, recent advancements have enabled high-capacity watermarks that remain imperceptible and are robust against various distortions [13 ###reference_b13###].\nThese advancements have enabled user-tracking in cloud-based scenarios, where the generative models\u2019 watermark user-specific signatures in the generated outputs. However, when models are available locally, users can easily bypass post-hoc watermarking process, making it difficult to track malicious activity.\nOur method addresses this challenge by combining key-based authentication with in-model watermarking, making it more difficult to bypass the watermarking techniques even in local environments.\n###figure_1###"
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Proposed Method",
|
| 21 |
+
"text": "To address the ease of bypassing post-hoc watermarking when users have access to model parameters, we propose an in-model watermarking technique. Unlike existing methods that embed a constant key, our approach enables user-specific watermarks by conditioning the model on the user\u2019s unique key. To prevent misuse, the model is trained to distinguish between real and fake keys, generating degraded output if a fake key is detected. An overview of our method during both training and inference is shown in Figure 1 ###reference_###.\nFirst, we uniformly sample a set of valid keys from the set of all possible keys . During training, a key is sampled from either or with equal probability. The key, , where , is projected to learnable embeddings, where is the key size and is the embedding dimensions, and fed to the key encoder . The output of , along with the input condition is fed to the trainable generator to get ."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "III-A Valid key losses",
|
| 27 |
+
"text": ""
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.1.1",
|
| 31 |
+
"parent_section_id": "3.1",
|
| 32 |
+
"section_name": "III-A1 Ensuring the watermark is embedded",
|
| 33 |
+
"text": "As illustrated in Figure 1 ###reference_###, during training we feed the generated waveform to the frozen pretrained message decoder to get .\nTo ensure that the respective models learn to embed the key as a watermark, we apply the cross entropy loss between and if the key is sampled from as per the equation 1 ###reference_###. For the gradients to propagate to , must be differentiable."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1.2",
|
| 37 |
+
"parent_section_id": "3.1",
|
| 38 |
+
"section_name": "III-A2 Ensuring the perceptual quality of the model",
|
| 39 |
+
"text": "To ensure that the fine-tuned model does not have degradation due to the introduction of the watermarking loss, we introduce a perceptual loss when .\nThe perceptual loss is defined as the MSE loss between the watermarked output of the frozen pre-trained model and . We get by providing to and feeding the output of , along with , to the pre-trained message encoder . Please refer to the Figure 1 ###reference_### for the notations.\nOur initial experiments suggested that the perceptual loss does not succeed in removing the perceptual distortions.\nTo further improve upon it, we introduce the MSE loss between the log normalized magnitude spectrogram of and as per equation 2 ###reference_###."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "III-B Invalid key losses",
|
| 45 |
+
"text": "When the sampled key, , we minimize the negative MSE between and .\nFor stability, we introduce a curriculum learning method wherein the invalid loss is restricted to a lower bound, , which is increased as per the details mentioned in Section IV ###reference_###."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.3",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "III-C Key Verification Loss",
|
| 51 |
+
"text": "To make it easier for the model to distinguish the valid and invalid keys, we feed the output of to a two-layer fully connected neural network with ReLU activation which is trained with cross-entropy loss to output zero if and one if ."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.4",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "III-D Total Loss",
|
| 57 |
+
"text": "During training we combine the losses as given in equation 4 ###reference_###\nwhere , and , is an indicator function that equals 1 if and 0 if and is an indicator function that equals 0 if and 1 if .\n###figure_2### ###figure_3###"
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4",
|
| 61 |
+
"parent_section_id": null,
|
| 62 |
+
"section_name": "IV Experiments",
|
| 63 |
+
"text": "We apply our key-based authentication method to two models: HiFi-GAN [15 ###reference_b15###], trained at 22.05 kHz, and Encodec 32 kHz [14 ###reference_b14###]. The input condition, , for HiFi-GAN model is MEL spectrogram, while for Encodec\u2019s decoder, it is latent codes. is composed of five alternating convolution and ReLU layers. The output of is added to the output of the second layer of for both HiFi-GAN and Encodec. We train the message encoder and message decoder based upon SilentCipher [13 ###reference_b13###], a deep learning based watermarking technique. Separate SilentCipher models are trained for a sampling rate of 22.05kHz and 32kHz. Unless otherwise stated, the models use a 16-bit key with 655 valid keys randomly selected from the possible 2^16 keys. Evaluations are conducted on six=second samples. For invalid loss, we employ curriculum training where the lower bound of the invalid loss is gradually decreased. In both HiFi-GAN and Encodec, we double after every 5000 iterations, starting from 0.005."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.1",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "IV-A Datasets",
|
| 69 |
+
"text": "For training the HiFi-GAN model we use the VCTK dataset [19 ###reference_b19###] compromising of 44 hours of speech data. For the Encodec model we use the MTG-Jamendo dataset [20 ###reference_b20###] which contains 55k full music audio tracks. The train, validation and testing set are split in the ratio 0.8:0.1:0.1. We process the data for HiFi-GAN by extracting the MEL spectrogram of the waveform with the size of the fourier transform being 1024, window length being 1024, hop size being 256 and nuumber of mels being 80. For Encodec, we process the waveform and extract the latent codes using Encodec\u2019s encoder."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.2",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "IV-B Training",
|
| 75 |
+
"text": "All our methods were fine-tuned using the Adam optimizer with a learning rate of 1e-4 for a total of 25k iterations for HiFi-GAN and 80k iterations for Encodec. The audio duration during training is fixed to 10 seconds. Although we don\u2019t introduce any distortions in the watermarked output during the training of the HiFi-GAN or Encodec models, we evaluate our models on various distortions like Gaussian noise, random equalization of frequency bands and audio compression algorithms. We iterate over the bit-rates 64kbps, 128kbps and 256 kbps across two compression method, MP3 and OGG."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "5",
|
| 79 |
+
"parent_section_id": null,
|
| 80 |
+
"section_name": "Results",
|
| 81 |
+
"text": "###figure_4###"
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5.1",
|
| 85 |
+
"parent_section_id": "5",
|
| 86 |
+
"section_name": "Objective Results",
|
| 87 |
+
"text": "We evaluate the accuracy of the decoded watermarks when the model is conditioned on valid keys, following distortion of the encoded signal. The applied distortions include additive Gaussian noise at 40 dB (gaus), random band-limited equalization of 15 dB at 35 Hz, 200 Hz, 1000 Hz, and 4000 Hz (eq), 16-bit floating point quantization (quant), random resampling between 40% and 100% of the original sampling rate (resamp), time-jittering (time_jit), and MP3/OGG compression at 64, 128, and 256 kbps. Since there are no established baselines, we compare our method to post-hoc watermarking techniques.\nTable I ###reference_### summarizes the objective results of applying the SilentCipher (SC) watermarking technique to both real and reconstructed samples generated by pretrained models on the VCTK and MTG datasets. The results demonstrate the robustness of the SilentCipher model in withstanding various distortions, consistently achieving an average SDR exceeding 30 dB. For the VCTK dataset, we use SilentCipher trained at a sampling rate of 22.05 kHz, while for the MTG dataset, the model trained at a sampling rate of 32 kHz is utilized.\nTable II ###reference_### presents the objective results for HiFi-GAN and Encodec model after fine-tuning for key-based authentication and watermarking. Signal-to-distortions ratios (SDRs) are computed between and , where SDR_valid represents conditioning on valid keys and SDR_invalid represents conditioning on invalid keys. As shown in Table II ###reference_###, SDR_valid is significantly higher than SDR_invalid, indicating the model\u2019s ability to distinguish between valid and invalid keys while largely preserving watermarks despite distortions."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "5.2",
|
| 91 |
+
"parent_section_id": "5",
|
| 92 |
+
"section_name": "Subjective Results",
|
| 93 |
+
"text": "We also conducted a subjective mean opinion score (MOS) test to assess the perceptual quality of the waveforms generated with valid and invalid keys. Sixteen audio engineers rated the audio on a scale of 1-5, 1 being completely unnatural and 5 representing completely natural samples. The results are presented in Table III ###reference_###. For the Encodec model, \u201dReal\u201d refers to unaltered samples from the MTG dataset and \u201dWatermarked\u201d refers to samples watermarked using the SilentCipher-32kHz model. Similarly, for the HiFi-GAN model, \u201dReal\u201d and \u201dWatermarked\u201d represent unaltered and watermarked samples of the VCTK dataset using SilentCipher-22.05kHz. \u201dValid\u201d and \u201dInvalid\u201d refer to the generated samples conditioned upon valid and invalid keys, respectively. The subjective results align with the objective metrics, demonstrating the model\u2019s ability to generative high-quliaty or degraded samples based on key validity.\n###figure_5###"
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5.3",
|
| 97 |
+
"parent_section_id": "5",
|
| 98 |
+
"section_name": "Probing the HiFi-GAN model",
|
| 99 |
+
"text": "We evaluated HiFi-GAN on 200 samples across all keys and plotted the minimum, average, and maximum SDR for valid and invalid keys as shown in Figure 2 ###reference_### and Figure 3 ###reference_###, respectively. Approximately 12% of the valid keys achieved an average SDR below 25dB and around 1% of the invalid keys achieve an average SDR above 20dB. As the SDR for a specific key across the samples does not vary much, shown by the small gap between the maximum and minimum SDR for each key, it is easy to verify if a key works well by evaluating the SDR on a few samples and discarding them if their SDR\u2019s lie beyond a certain threshold.\nWe also conducted a white-box attack on the HiFi-GAN model to remove the embedded watermark by adding a small Gaussian noise to the output of each layer of the model. We vary the standard deviation of the added Gaussian noise and plot the accuracy of the watermark as a function of SDR of the generated samples. Although high-energy Gaussian noise degraded the accuracy of the embedded watermark to zero, this occured alongsize significant degradation in the SDR of the generated sample. For low-energy noise, the embedded watermark is detectable with a high accuracy.\nFinally, we explored the HiFi-GAN model\u2019s scalability by plotting SDR_valid and SDR_invalid as a function of the total number of keys while keeping the number of valid keys fixed at 655 (Figure 4 ###reference_###). The results indicate that SDR_valid and SDR_invalid remain distinct as the total key size increases."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "6",
|
| 103 |
+
"parent_section_id": null,
|
| 104 |
+
"section_name": "VI Conclusions",
|
| 105 |
+
"text": "We present a novel approach for authenticating generative AI models in white-box scenarios, where users have full access to model parameters. The effectiveness of the proposed method is demonstrated through comprehensive objective and subjective evaluations on the HiFi-GAN and Encodec models. Future work will focus on minimizing perceptible distortions in the generated outputs for valid cases and expanding both the number of valid keys and the total key size."
|
| 106 |
+
}
|
| 107 |
+
],
|
| 108 |
+
"appendix": [],
|
| 109 |
+
"tables": {
|
| 110 |
+
"1": {
|
| 111 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Base Silent Cipher Model Objective Scores. We compare the baselines using objective test scores by simulating various attacks. SDR: SDR between watermarked and original signal, eq: random equalization, gaus: additive Gaussian noise of 40dB, quant: 16-bit floating-point Quantization, time_jit: time-jittering, resamp: random resampling from 6.4kHz to 16kHz and orig: No attacks.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.3.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1.1\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.1.1\" style=\"font-size:80%;\">Models</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1.2\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.2.1\" style=\"font-size:80%;\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1.3\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.3.1\" style=\"font-size:80%;\">SDR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1.4\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.4.1\" style=\"font-size:80%;\">eq</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1.5\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.5.1\" style=\"font-size:80%;\">gaus</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1.6\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.6.1\" style=\"font-size:80%;\">mp3_64k</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1.7\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.7.1\" style=\"font-size:80%;\">mp3_128k</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1.8\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.8.1\" style=\"font-size:80%;\">mp3_256k</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1.9\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.9.1\" style=\"font-size:80%;\">ogg_64k</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1.10\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.10.1\" style=\"font-size:80%;\">ogg_128k</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1.11\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.11.1\" style=\"font-size:80%;\">ogg_256k</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1.12\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.12.1\" style=\"font-size:80%;\">quant</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1.13\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.13.1\" style=\"font-size:80%;\">resamp</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1.14\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.14.1\" style=\"font-size:80%;\">time_jit</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.1.1.15\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1.15.1\" 
style=\"font-size:80%;\">orig</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.3.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.2.1.1\"><span class=\"ltx_text\" id=\"S3.T1.3.2.1.1.1\" style=\"font-size:80%;\">SC</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.2.1.2\"><span class=\"ltx_text\" id=\"S3.T1.3.2.1.2.1\" style=\"font-size:80%;\">VCTK</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.2.1.3\"><span class=\"ltx_text\" id=\"S3.T1.3.2.1.3.1\" style=\"font-size:80%;\">31.03</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.2.1.4\"><span class=\"ltx_text\" id=\"S3.T1.3.2.1.4.1\" style=\"font-size:80%;\">0.98</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.2.1.5\"><span class=\"ltx_text\" id=\"S3.T1.3.2.1.5.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.2.1.6\"><span class=\"ltx_text\" id=\"S3.T1.3.2.1.6.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.2.1.7\"><span class=\"ltx_text\" id=\"S3.T1.3.2.1.7.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.2.1.8\"><span class=\"ltx_text\" id=\"S3.T1.3.2.1.8.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.2.1.9\"><span class=\"ltx_text\" id=\"S3.T1.3.2.1.9.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.2.1.10\"><span class=\"ltx_text\" id=\"S3.T1.3.2.1.10.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.2.1.11\"><span class=\"ltx_text\" id=\"S3.T1.3.2.1.11.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.2.1.12\"><span class=\"ltx_text\" id=\"S3.T1.3.2.1.12.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.2.1.13\"><span class=\"ltx_text\" id=\"S3.T1.3.2.1.13.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.2.1.14\"><span class=\"ltx_text\" id=\"S3.T1.3.2.1.14.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.2.1.15\"><span class=\"ltx_text\" id=\"S3.T1.3.2.1.15.1\" style=\"font-size:80%;\">1.00</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.3.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2.1\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.1.1\" style=\"font-size:80%;\">HiFi-GAN+SC</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2.2\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.2.1\" style=\"font-size:80%;\">VCTK</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2.3\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.3.1\" style=\"font-size:80%;\">30.26</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2.4\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.4.1\" style=\"font-size:80%;\">0.99</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2.5\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.5.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2.6\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.6.1\" 
style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2.7\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.7.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2.8\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.8.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2.9\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.9.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2.10\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.10.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2.11\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.11.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2.12\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.12.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2.13\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.13.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2.14\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.14.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.2.15\"><span class=\"ltx_text\" id=\"S3.T1.3.3.2.15.1\" style=\"font-size:80%;\">1.00</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.4.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.3.1\"><span class=\"ltx_text\" id=\"S3.T1.3.4.3.1.1\" style=\"font-size:80%;\">SC</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.3.2\"><span class=\"ltx_text\" id=\"S3.T1.3.4.3.2.1\" style=\"font-size:80%;\">MTG</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.3.3\"><span class=\"ltx_text\" id=\"S3.T1.3.4.3.3.1\" style=\"font-size:80%;\">32.39</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.3.4\"><span class=\"ltx_text\" id=\"S3.T1.3.4.3.4.1\" style=\"font-size:80%;\">0.99</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.3.5\"><span class=\"ltx_text\" id=\"S3.T1.3.4.3.5.1\" style=\"font-size:80%;\">0.99</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.3.6\"><span class=\"ltx_text\" id=\"S3.T1.3.4.3.6.1\" style=\"font-size:80%;\">0.99</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.3.7\"><span class=\"ltx_text\" id=\"S3.T1.3.4.3.7.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.3.8\"><span class=\"ltx_text\" id=\"S3.T1.3.4.3.8.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.3.9\"><span class=\"ltx_text\" id=\"S3.T1.3.4.3.9.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.3.10\"><span class=\"ltx_text\" id=\"S3.T1.3.4.3.10.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.3.11\"><span class=\"ltx_text\" id=\"S3.T1.3.4.3.11.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.3.12\"><span class=\"ltx_text\" id=\"S3.T1.3.4.3.12.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.3.13\"><span class=\"ltx_text\" id=\"S3.T1.3.4.3.13.1\" 
style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.3.14\"><span class=\"ltx_text\" id=\"S3.T1.3.4.3.14.1\" style=\"font-size:80%;\">0.99</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.3.15\"><span class=\"ltx_text\" id=\"S3.T1.3.4.3.15.1\" style=\"font-size:80%;\">1.00</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.5.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.4.1\"><span class=\"ltx_text\" id=\"S3.T1.3.5.4.1.1\" style=\"font-size:80%;\">Encodec+SC</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.4.2\"><span class=\"ltx_text\" id=\"S3.T1.3.5.4.2.1\" style=\"font-size:80%;\">MTG</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.4.3\"><span class=\"ltx_text\" id=\"S3.T1.3.5.4.3.1\" style=\"font-size:80%;\">30.78</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.4.4\"><span class=\"ltx_text\" id=\"S3.T1.3.5.4.4.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.4.5\"><span class=\"ltx_text\" id=\"S3.T1.3.5.4.5.1\" style=\"font-size:80%;\">0.99</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.4.6\"><span class=\"ltx_text\" id=\"S3.T1.3.5.4.6.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.4.7\"><span class=\"ltx_text\" id=\"S3.T1.3.5.4.7.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.4.8\"><span class=\"ltx_text\" id=\"S3.T1.3.5.4.8.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.4.9\"><span class=\"ltx_text\" id=\"S3.T1.3.5.4.9.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.4.10\"><span class=\"ltx_text\" id=\"S3.T1.3.5.4.10.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.4.11\"><span class=\"ltx_text\" id=\"S3.T1.3.5.4.11.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.4.12\"><span class=\"ltx_text\" id=\"S3.T1.3.5.4.12.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.4.13\"><span class=\"ltx_text\" id=\"S3.T1.3.5.4.13.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.4.14\"><span class=\"ltx_text\" id=\"S3.T1.3.5.4.14.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.4.15\"><span class=\"ltx_text\" id=\"S3.T1.3.5.4.15.1\" style=\"font-size:80%;\">1.00</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 112 |
+
"capture": "TABLE I: Base Silent Cipher Model Objective Scores. We compare the baselines using objective test scores by simulating various attacks. SDR: SDR between watermarked and original signal, eq: random equalization, gaus: additive Gaussian noise of 40dB, quant: 16-bit floating-point Quantization, time_jit: time-jittering, resamp: random resampling from 6.4kHz to 16kHz and orig: No attacks."
|
| 113 |
+
},
|
| 114 |
+
"2": {
|
| 115 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Objective Test Results. SDR Valid: SDR (in dB) of the generated sample when conditioned on a valid key, SDR Invalid: SDR (in dB) of the generated sample when conditioned on an invalid key. For other notations, refer to Table <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.07743v2#S3.T2\" title=\"TABLE II \u2023 III-D Total Loss \u2023 III Proposed Method \u2023 LOCKEY: A Novel Approach to Model Authentication and Deepfake Tracking\"><span class=\"ltx_text ltx_ref_tag\">II</span></a> captions</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T2.4.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.4.1.1.1\"><span class=\"ltx_text\" id=\"S3.T2.4.1.1.1.1\" style=\"font-size:80%;\">Models</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.4.1.1.2\"><span class=\"ltx_text\" id=\"S3.T2.4.1.1.2.1\" style=\"font-size:80%;\">SDR Valid</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.4.1.1.3\"><span class=\"ltx_text\" id=\"S3.T2.4.1.1.3.1\" style=\"font-size:80%;\">SDR Invalid</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.4.1.1.4\"><span class=\"ltx_text\" id=\"S3.T2.4.1.1.4.1\" style=\"font-size:80%;\">eq</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.4.1.1.5\"><span class=\"ltx_text\" id=\"S3.T2.4.1.1.5.1\" style=\"font-size:80%;\">gaus</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.4.1.1.6\"><span class=\"ltx_text\" id=\"S3.T2.4.1.1.6.1\" style=\"font-size:80%;\">mp3_64k</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.4.1.1.7\"><span class=\"ltx_text\" id=\"S3.T2.4.1.1.7.1\" style=\"font-size:80%;\">mp3_128k</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.4.1.1.8\"><span class=\"ltx_text\" id=\"S3.T2.4.1.1.8.1\" style=\"font-size:80%;\">mp3_256k</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.4.1.1.9\"><span class=\"ltx_text\" id=\"S3.T2.4.1.1.9.1\" style=\"font-size:80%;\">ogg_64k</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.4.1.1.10\"><span class=\"ltx_text\" id=\"S3.T2.4.1.1.10.1\" style=\"font-size:80%;\">ogg_128k</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.4.1.1.11\"><span class=\"ltx_text\" id=\"S3.T2.4.1.1.11.1\" style=\"font-size:80%;\">ogg_256k</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.4.1.1.12\"><span class=\"ltx_text\" id=\"S3.T2.4.1.1.12.1\" style=\"font-size:80%;\">quant</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.4.1.1.13\"><span class=\"ltx_text\" id=\"S3.T2.4.1.1.13.1\" style=\"font-size:80%;\">resamp</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.4.1.1.14\"><span class=\"ltx_text\" id=\"S3.T2.4.1.1.14.1\" style=\"font-size:80%;\">time_jit</span></th>\n<th class=\"ltx_td ltx_align_center 
ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T2.4.1.1.15\"><span class=\"ltx_text\" id=\"S3.T2.4.1.1.15.1\" style=\"font-size:80%;\">orig</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.4.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.2.1.1\"><span class=\"ltx_text\" id=\"S3.T2.4.2.1.1.1\" style=\"font-size:80%;\">HiFi-GAN</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.2.1.2\"><span class=\"ltx_text\" id=\"S3.T2.4.2.1.2.1\" style=\"font-size:80%;\">25.95</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.2.1.3\"><span class=\"ltx_text\" id=\"S3.T2.4.2.1.3.1\" style=\"font-size:80%;\">1.28</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.2.1.4\"><span class=\"ltx_text\" id=\"S3.T2.4.2.1.4.1\" style=\"font-size:80%;\">0.99</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.2.1.5\"><span class=\"ltx_text\" id=\"S3.T2.4.2.1.5.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.2.1.6\"><span class=\"ltx_text\" id=\"S3.T2.4.2.1.6.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.2.1.7\"><span class=\"ltx_text\" id=\"S3.T2.4.2.1.7.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.2.1.8\"><span class=\"ltx_text\" id=\"S3.T2.4.2.1.8.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.2.1.9\"><span class=\"ltx_text\" id=\"S3.T2.4.2.1.9.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.2.1.10\"><span class=\"ltx_text\" id=\"S3.T2.4.2.1.10.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.2.1.11\"><span class=\"ltx_text\" id=\"S3.T2.4.2.1.11.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.2.1.12\"><span class=\"ltx_text\" id=\"S3.T2.4.2.1.12.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.2.1.13\"><span class=\"ltx_text\" id=\"S3.T2.4.2.1.13.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.2.1.14\"><span class=\"ltx_text\" id=\"S3.T2.4.2.1.14.1\" style=\"font-size:80%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T2.4.2.1.15\"><span class=\"ltx_text\" id=\"S3.T2.4.2.1.15.1\" style=\"font-size:80%;\">1.00</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.3.2.1\"><span class=\"ltx_text\" id=\"S3.T2.4.3.2.1.1\" style=\"font-size:80%;\">Encodec</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.3.2.2\"><span class=\"ltx_text\" id=\"S3.T2.4.3.2.2.1\" style=\"font-size:80%;\">23.20</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.3.2.3\"><span class=\"ltx_text\" id=\"S3.T2.4.3.2.3.1\" style=\"font-size:80%;\">3.72</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.3.2.4\"><span class=\"ltx_text\" id=\"S3.T2.4.3.2.4.1\" style=\"font-size:80%;\">0.91</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.3.2.5\"><span class=\"ltx_text\" id=\"S3.T2.4.3.2.5.1\" 
style=\"font-size:80%;\">0.94</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.3.2.6\"><span class=\"ltx_text\" id=\"S3.T2.4.3.2.6.1\" style=\"font-size:80%;\">0.97</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.3.2.7\"><span class=\"ltx_text\" id=\"S3.T2.4.3.2.7.1\" style=\"font-size:80%;\">0.97</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.3.2.8\"><span class=\"ltx_text\" id=\"S3.T2.4.3.2.8.1\" style=\"font-size:80%;\">0.97</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.3.2.9\"><span class=\"ltx_text\" id=\"S3.T2.4.3.2.9.1\" style=\"font-size:80%;\">0.97</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.3.2.10\"><span class=\"ltx_text\" id=\"S3.T2.4.3.2.10.1\" style=\"font-size:80%;\">0.97</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.3.2.11\"><span class=\"ltx_text\" id=\"S3.T2.4.3.2.11.1\" style=\"font-size:80%;\">0.97</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.3.2.12\"><span class=\"ltx_text\" id=\"S3.T2.4.3.2.12.1\" style=\"font-size:80%;\">0.97</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.3.2.13\"><span class=\"ltx_text\" id=\"S3.T2.4.3.2.13.1\" style=\"font-size:80%;\">0.88</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.3.2.14\"><span class=\"ltx_text\" id=\"S3.T2.4.3.2.14.1\" style=\"font-size:80%;\">0.97</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T2.4.3.2.15\"><span class=\"ltx_text\" id=\"S3.T2.4.3.2.15.1\" style=\"font-size:80%;\">0.97</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 116 |
+
"capture": "TABLE II: Objective Test Results. SDR Valid: SDR (in dB) of the generated sample when conditioned on a valid key, SDR Invalid: SDR (in dB) of the generated sample when conditioned on an invalid key. For other notations, refer to Table II captions"
|
| 117 |
+
},
|
| 118 |
+
"3": {
|
| 119 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Subjective Scores with 95% confidence intervals</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T3.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T3.8.9.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S3.T3.8.9.1.1\"><span class=\"ltx_text\" id=\"S3.T3.8.9.1.1.1\" style=\"font-size:80%;\">Encodec</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S3.T3.8.9.1.2\"><span class=\"ltx_text\" id=\"S3.T3.8.9.1.2.1\" style=\"font-size:80%;\">MOS</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S3.T3.8.9.1.3\"><span class=\"ltx_text\" id=\"S3.T3.8.9.1.3.1\" style=\"font-size:80%;\">HiFi-GAN</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T3.8.9.1.4\"><span class=\"ltx_text\" id=\"S3.T3.8.9.1.4.1\" style=\"font-size:80%;\">MOS</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T3.2.2.3\"><span class=\"ltx_text\" id=\"S3.T3.2.2.3.1\" style=\"font-size:80%;\">Real</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S3.T3.1.1.1\">\n<span class=\"ltx_text\" id=\"S3.T3.1.1.1.1\" style=\"font-size:80%;\">4.06 </span><span class=\"ltx_text\" id=\"S3.T3.1.1.1.2\" style=\"font-size:80%;\"> 0.22</span>\n</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T3.2.2.4\"><span class=\"ltx_text\" id=\"S3.T3.2.2.4.1\" style=\"font-size:80%;\">Real</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T3.2.2.2\">\n<span class=\"ltx_text\" id=\"S3.T3.2.2.2.1\" style=\"font-size:80%;\">4.31 </span><span class=\"ltx_text\" id=\"S3.T3.2.2.2.2\" style=\"font-size:80%;\"> 0.20</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S3.T3.4.4.3\"><span class=\"ltx_text\" id=\"S3.T3.4.4.3.1\" style=\"font-size:80%;\">Watermarked</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.3.3.1\">\n<span class=\"ltx_text\" id=\"S3.T3.3.3.1.1\" style=\"font-size:80%;\">3.92 </span><span class=\"ltx_text\" id=\"S3.T3.3.3.1.2\" style=\"font-size:80%;\"> 0.22</span>\n</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S3.T3.4.4.4\"><span class=\"ltx_text\" id=\"S3.T3.4.4.4.1\" style=\"font-size:80%;\">Watermarked</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.4.4.2\">\n<span class=\"ltx_text\" id=\"S3.T3.4.4.2.1\" style=\"font-size:80%;\">4.35 </span><span class=\"ltx_text\" id=\"S3.T3.4.4.2.2\" style=\"font-size:80%;\"> 0.16</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S3.T3.6.6.3\"><span class=\"ltx_text\" id=\"S3.T3.6.6.3.1\" style=\"font-size:80%;\">Valid</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.5.5.1\">\n<span class=\"ltx_text\" id=\"S3.T3.5.5.1.1\" style=\"font-size:80%;\">3.76 </span><span class=\"ltx_text\" id=\"S3.T3.5.5.1.2\" style=\"font-size:80%;\"> 0.22</span>\n</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" 
id=\"S3.T3.6.6.4\"><span class=\"ltx_text\" id=\"S3.T3.6.6.4.1\" style=\"font-size:80%;\">Valid</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.6.6.2\">\n<span class=\"ltx_text\" id=\"S3.T3.6.6.2.1\" style=\"font-size:80%;\">3.17 </span><span class=\"ltx_text\" id=\"S3.T3.6.6.2.2\" style=\"font-size:80%;\"> 0.19</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb\" id=\"S3.T3.8.8.3\"><span class=\"ltx_text\" id=\"S3.T3.8.8.3.1\" style=\"font-size:80%;\">Invalid</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T3.7.7.1\">\n<span class=\"ltx_text\" id=\"S3.T3.7.7.1.1\" style=\"font-size:80%;\">1.11 </span><span class=\"ltx_text\" id=\"S3.T3.7.7.1.2\" style=\"font-size:80%;\"> 0.07</span>\n</td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb\" id=\"S3.T3.8.8.4\"><span class=\"ltx_text\" id=\"S3.T3.8.8.4.1\" style=\"font-size:80%;\">Invalid</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T3.8.8.2\">\n<span class=\"ltx_text\" id=\"S3.T3.8.8.2.1\" style=\"font-size:80%;\">1.50 </span><span class=\"ltx_text\" id=\"S3.T3.8.8.2.2\" style=\"font-size:80%;\"> 0.13</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 120 |
+
"capture": "TABLE III: Subjective Scores with 95% confidence intervals"
|
| 121 |
+
}
|
| 122 |
+
},
|
| 123 |
+
"image_paths": {
|
| 124 |
+
"1": {
|
| 125 |
+
"figure_path": "2409.07743v2_figure_1.png",
|
| 126 |
+
"caption": "Figure 1: Model Training & Inference Flow",
|
| 127 |
+
"url": "http://arxiv.org/html/2409.07743v2/x1.png"
|
| 128 |
+
},
|
| 129 |
+
"2": {
|
| 130 |
+
"figure_path": "2409.07743v2_figure_2.png",
|
| 131 |
+
"caption": "Figure 2: SDR across valid keys. The keys are sorted ascendingly based on their mean SDR on 200 samples",
|
| 132 |
+
"url": "http://arxiv.org/html/2409.07743v2/x2.png"
|
| 133 |
+
},
|
| 134 |
+
"3": {
|
| 135 |
+
"figure_path": "2409.07743v2_figure_3.png",
|
| 136 |
+
"caption": "Figure 3: SDR across invalid keys. The keys are sorted descendingly based on their mean SDR on 200 samples",
|
| 137 |
+
"url": "http://arxiv.org/html/2409.07743v2/x3.png"
|
| 138 |
+
},
|
| 139 |
+
"4": {
|
| 140 |
+
"figure_path": "2409.07743v2_figure_4.png",
|
| 141 |
+
"caption": "Figure 4: Valid-Invalid SDR across no of total keys",
|
| 142 |
+
"url": "http://arxiv.org/html/2409.07743v2/x4.png"
|
| 143 |
+
},
|
| 144 |
+
"5": {
|
| 145 |
+
"figure_path": "2409.07743v2_figure_5.png",
|
| 146 |
+
"caption": "Figure 5: Distortions Using Gaussian Noise. The numbers on corresponding to each data point denote the standard deviation of the added gaussian noise.",
|
| 147 |
+
"url": "http://arxiv.org/html/2409.07743v2/x5.png"
|
| 148 |
+
}
|
| 149 |
+
},
|
| 150 |
+
"validation": true,
|
| 151 |
+
"references": [],
|
| 152 |
+
"url": "http://arxiv.org/html/2409.07743v2"
|
| 153 |
+
}
|
20240921/2409.09467v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240921/2409.09539v2.json
ADDED
|
@@ -0,0 +1,176 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Ensuring System-Level Protection against Eavesdropping Adversaries in Distributed Dynamical Systems",
|
| 3 |
+
"abstract": "In this work, we address the objective of protecting the states of a distributed dynamical system from eavesdropping adversaries. We prove that state-of-the-art distributed algorithms, which rely on communicating the agents\u2019 states, are vulnerable in that the final states can be perfectly estimated by any adversary including those with arbitrarily small eavesdropping success probability.\nWhile existing literature typically adds an extra layer of protection, such as encryption or differential privacy techniques, we demonstrate the emergence of a fundamental protection quotient in distributed systems when innovation signals are communicated instead of the agents\u2019 states.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Privacy against eavesdropping adversaries has been a major concern in distributed systems [1 ###reference_b1###]. To protect data from eavesdropping adversaries one needs to deploy some mechanism where the eavesdroppers are not able to perfectly decode the data from the intercepted communications.\nOften, these mechanisms are to be deployed without having information about the adversary\u2019s capability and knowledge.\nIn this work, we consider eavesdropping adversaries and protection against such adversaries [2 ###reference_b2###, 3 ###reference_b3###] for a class of systems following the setup of Figure 1 ###reference_###.\nIn general, there are mainly two techniques for dealing eavesdropping adversaries, namely differential privacy [4 ###reference_b4###, 5 ###reference_b5###] and secure multiparty computation [6 ###reference_b6###, 7 ###reference_b7###].\nDifferential privacy is a noncryptographic method for preserving privacy by carefully adding noise to exchanged messages. It is commonly used due to its computational simplicity. However, there is an inherent trade-off between privacy and accuracy that one has to take into account due to the nature of the method that intentionally adds noise to communication.\nSecure multiparty computation, on the other hand, refers to cryptographic techniques to ensure privacy in a distributed network, where the goal is to\nevaluate a function of a number of parties\u2019 private data without revealing each party\u2019s data to others; see, e.g., [7 ###reference_b7###, 6 ###reference_b6###]. As most of secure multiparty computation protocols trade algorithmic\ncomplexity for security, they may not be suited for many practical applications involving, e.g., systems with limited resources or subject to hard real-time constraints.\n###figure_1### Besides, existing approaches to address privacy concerns of distributed systems has mainly focused on the protection of the initial states of the agents [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###].\nHowever, in many distributed systems (e.g., distributed optimization [12 ###reference_b12###], rendezvous problems [13 ###reference_b13###], synchronization [14 ###reference_b14###], federated learning [15 ###reference_b15###]) initial states often are of less importance and sometimes chosen arbitrarily; see e.g., [16 ###reference_b16###, 17 ###reference_b17###].\nInstead, the final state of the agents are more important because they represent the solution of a certain decision making problem.\nFor example, in networked consensus problems [18 ###reference_b18###], an agent interacts with its neighbors and contains the states of the neighboring agents. Here, the objective is to agree on the final state to which all the agents\u2019 states will converge.\nDistributed consensus optimization is a variant of the network consensus problems where a group of agents exchange their local information to collaboratively optimize a network-wide objective function, and\nIn the context of federated learning, where each client/agent shared its local weights of a trained neural network with a server for the purpose of aggregating the distributed information. The server sends the aggregated weight () back to the client for further update. In these cases, if the final state of one agent is eavesdropped, then so is that of the whole system. 
This requires modifications in privacy metrics as well as in algorithms/methods to achieve privacy.\nContributions: The main contribution of this work is the analysis of system-level protection against an eavesdropping adversary under an innovation-sharing communication protocol.\nWe derive an analytical expression for the achievable protection against a class of eavesdropping adversaries and demonstrate how the eavesdropper\u2019s capabilities affect this protection.\nOur analysis reveals a fundamental connection between the achievable protection and the total quadratic variation of the agent\u2019s state trajectory.\nBy leveraging this analysis, we then demonstrate how the proposed method can be applied to protect the solutions of distributed optimization problems.\nTo this end, we develop a Distributed Innovation-Sharing Consensus Optimization (DICO) algorithm.\nWe also discuss the effects of the algorithm\u2019s parameters on protection and convergence speed, as well as their trade-offs.\nOrganization We formally state the problem in Section II ###reference_###, and the adversary\u2019s eavesdropping model is discussed in Section II-B ###reference_###.\nThe system-level protection against such adversaries is analyzed in Section III ###reference_###, where we leverage the protocol of sharing the state increment (i.e., ) instead of the true state as an effective means of protection\u2014such protocols are often categorized as innovation sharing schemes.\nIn Section IV ###reference_### we investigate distributed consensus optimization problems, as a special case of our developed theory and analysis.\nThe evidence of a system-level privacy is demonstrated using numerical simulation on a distributed optimization problem in Section V ###reference_###.\nThe effects of certain hyperparameters of the optimization algorithm on the achieved protection is thoroughly discussed in that section.\nNotation:\nLet for any . For a matrix , let denote\nits element, and its transpose.\nA directed graph consists of a set of nodes \nand a set of directed edges.\nA directed path is a sequence of edges in the form .\nThe graph\n is strongly connected if there is a\ndirected path from each node to any other node.\nNode is an in-neighbor (respectively, out-neighbor) to node if (respectively, ).\nFor each node , we use and to denote the sets of its in-neighbors and out-neighbors, respectively. Assume that and ."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Problem Formulation",
|
| 15 |
+
"text": "We consider a scenario where an agent interacts with a system over a compromised communication channel (see Figure 1 ###reference_###).\nThe agent is required to transmit its states to, and receive data from, the system at every time instance over this channel.\nThe communication channel is compromised due to the presence of an eavesdropper that can intercept the incoming and outgoing messages of this agent with probability .\nThe eavesdropper\u2019s objective is to estimate the agent\u2019s state as closely as possible.\nIn some applications, the eavesdropper\u2019s objective is to only estimate the \u2018final state\u2019 of the agent, i.e., , where could be finite or infinite.\nThe agent may not be aware of the presence of such adversaries."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A Agent Dynamic Model and Assumption",
|
| 21 |
+
"text": "The agent follows the dynamics\nwhere , and are the agent\u2019s states, received data, and control input, respectively, at time . Here, is the message that is sent from the agent to the system at time and in general can be a function that depends on the agent\u2019s states up to time .\nThe agent state following dynamic (1 ###reference_###) converges for any .\nThis assumption means that we consider only stable dynamics, which applies to numerous practical applications as mentioned in the previous section. This is also a key difference between our model and that in [19 ###reference_b19###], which is an unstable linear dynamics instead."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B Eavesdropping Adversaries",
|
| 27 |
+
"text": "Assume that the adversary eavesdrops all the outgoing (and incoming) transmissions of the agent.\nLet denote the adversary\u2019s estimate at time .\nThe eavesdropping mechanism is probabilistic and may lead to failed interception of the transmitted messages, similar to the models in[20 ###reference_b20###, 19 ###reference_b19###].\nThe success rate of eavesdropping depends on several factors including, e.g.,\nsignal-to-interference-plus-noise ratio,\nchannel condition,\nand directionalities of transmitting and receiving antennas.\nThis limitation in eavesdropping capability is often modeled by a randomness in the eavesdropping outcome.\nLet be a Bernoulli random variable such that\nThe random variables and are independent of each other for all and does not depend on the states of the physical system .\nWe denote , for some .\nWe exclude since, in this case, the adversary intercepts everything and hence, no protection is achievable.\nSimilarly, we exclude since it implies that no adversary is present.\nThe adversaries know the form of the exchanged messages.\nFor example, if the physical system exchanges at time , then the adversary knows the function and intercepts .\nAssumption 2 ###reference_umption2### implies that the adversary knows whether true states are communicated or not.\nHere, could be a deterministic or randomized function (e.g., quantization, encryption, adding noise to ).\nIn general, may also depend on the past .\nIn this paper we consider the form \nand demonstrate the benefits of this simple form in retaining privacy.\nIn this paper, we embark on this direction by studying the following simple class of adversary dynamics.\nLet denote the message that the physical system broadcasts at time , and is the adversary\u2019s estimate of . Consider the following class of estimation dynamics\nfor some initial state chosen by the adversary (which can be a random vector) and a sequence of weights designed by the adversary. Here, for any ,\nif , then\n,\ni.e., the adversary simply takes the eavesdropping outcome at time as its update, without considering past information.\nOn the other hand, if , then when and otherwise. In other words, if the current interception is unsuccessful, the adversary uses its last estimate with some weight .\nThis is also equivalent to using the last successfully intercepted message, say , with the weight , considering the number of recent unsuccessful attempts.\nIn general, the time-varying weight depends on the adversary\u2019s knowledge and possibly on .\nThis is an open problem and left for future work.\nTo continue the analysis, we instead focus on the following special case\nHere, we assume simply to decouple and .\nNote that the state will carry different meanings depending on the type of exchanged messages . The adversary\u2019s goal is to use to estimate the convergence point of ."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "II-C -Protection Against Adversaries",
|
| 33 |
+
"text": "Let denote the adversary\u2019s estimate of the state at time and let denote the corresponding estimation error.\nSince the eavesdropping success is random, the estimated state and the estimation error are random variables for all .\nWe consider the following distortion based metric to quantify protection against the adversary.\nThe agent\u2019s state is -protected against the eavesdropping adversary for some if\nThe proposed protection metric is similar in principle to the distortion metric proposed in [21 ###reference_b21###].\nHere, we use the second moment to quantify the quality of protection since it is tightly coupled to the entropy-power of the random variable .\nIn particular, a lower bound on immediately provides a lower bound on the entropy-power of and consequently a lower bound on the randomness of .111\nEntropy power of a random vector is , where is the entropy of . For any random vector , we have where the equality holds only for Gaussian random vectors with uncorrelated components."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.4",
|
| 37 |
+
"parent_section_id": "2",
|
| 38 |
+
"section_name": "II-D Innovation-shared Communication",
|
| 39 |
+
"text": "As we will see, there will be no protection when the agent exchanges its true state, i.e., . Thus, we propose to use an innovation-shared communication protocol where the agent shares instead of at time .\nIn this protocol, is defined as follows:\nTo the best our knowledge, this particular communication protocol was first used in [19 ###reference_b19###] to achieve protection against eavesdropping adversaries for an remote estimation application.\nLater, this protocol was used in [22 ###reference_b22###] to study the protection against eavesdropping adversaries in networked consensus problems.\nAlthough the use of (7 ###reference_###) is relatively new in understanding privacy, however, it has been used in other control problems such as in quantized optimal control.\nThe benefit of using this communication protocol is that the environment can perfectly decode from \u2019s by using the relationship .\nAlthough our communication model (7 ###reference_###) appears simple, it has several benefits. First and foremost, as we will show later, it is sufficient for rendering setup of Figure 1 ###reference_### unprotected when the agent shares , thus motivating our modifications to those algorithms to enhance their protection.\nSecond, it enables us to conduct a rigorous analysis and provide insights into the effect of adversary\u2019s model parameters to protection, serving as basis for further extensions.\nProblem Statement: The objective of this paper is to analyze and derive the system level protection of system (1 ###reference_###) under the innovation-sharing scheme (7 ###reference_###) against the eavesdropping adversaries in Section II-B ###reference_###."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "III System-Level Protection Analysis",
|
| 45 |
+
"text": ""
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.1",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "III-A State-based Communication",
|
| 51 |
+
"text": "In this section, we show that any system in the form of Figure 1 ###reference_### is unprotected against eavesdropping adversaries regardless of the eavesdropping probability .\nThe state of the agent is -protected if .\nLet and . Then, (4 ###reference_###) simply becomes\nwhere is as defined in (3 ###reference_###). In this case, is the adversary\u2019s estimate of , and thus \nThe estimation error follows the dynamics \nwhere .\nThe estimation error is a random process due to the presence of the Bernoulli random variables .\nLet\n. We have\nwhere we have used the fact that the random variables is independent of and .\nNote that since converges (Assumption 1 ###reference_umption1###).\nConsequently, from (8 ###reference_###)\u2013(9 ###reference_###), we obtain and as .\nThis completes the proof.\n\u220e\nThe intuition behind this result is that, an adversary is able to intercept a transmission far in the future with probability , and hence, it obtains for a large enough .222\nLet us define an event .\nThe complementary event .\nTherefore, since .\nConsequently, for all .\nThe event denotes a successful interception at time later than .\n\nIn fact, if and can be inferred/decoded from , then the agent state is also not protected."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.2",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "III-B Innovation-based communications",
|
| 57 |
+
"text": "Let the agent exchange state increments instead of actual state . Thus,\nwhere we define .\nIn this case the exchanged message at time is .\nThus, (4 ###reference_###) becomes\nHere, is the adversary\u2019s estimate of and thus\nwith .\nThen the error satisfies\nwith .\nNext, taking expectation on both sides of (11 ###reference_###) yields\nwhere , , and with .\nFurthermore,\nwith .\nThus, taking expectations on both sides of (13 ###reference_###) yields\nand .\nUsing the above relations,\nwe can find the limit as follows.\nSuppose and .\nLet , , and . Then,\nwhere and .\nSee Appendix VII-A ###reference_###.\n\u220e\nThis result holds without any assumption on the agent\u2019s dynamics , its control objective, or the structure of , which makes out analysis applicable to a wide range of problems.\nHere, and \nindirectly affect the protection amount through the variable .\nDifferent choices for or will affect the norm differently, and consequently, resulting in different amounts of protection.\nLet us note the following. First, conditions and are sufficient for the stability and boundedness of the systems in (14 ###reference_###)\u2013(16 ###reference_###), and hence the finiteness of given above.333Note that also implies . Although and , which is finite, it does not imply that ; to see this, consider, e.g., for .\nSecond, since for any , it follows that, to minimize the adversary must choose , or equivalently, select to be a deterministic quantity.\nThird, depends linearly on as follows\nHowever, this is rather complicated for computing ; in practice, we use the recursive form in (14 ###reference_###) instead. Below, we provide a lower bound for the protection that does not depend on explicitly; see Appendix VII-B ###reference_### for a proof.\nFor any , let . Then\nwhere equality holds at .\nSee Appendix VII-B ###reference_###.\n\u220e\nSince the lower bound given in (18 ###reference_###) is valid for any , one may maximize the RHS in (18 ###reference_###) with respect to to obtain a tighter bound.\nOn the other hand, the adversary should minimize this lower bound by selecting and appropriately.\nSince affects the part , an optimal clearly depends on and .\nIn absence of knowledge on and , a rational choice is to pick , which will minimize the worst-case value of .\nFinding the optimal is even more complicated as it depends not only on and but also on , where is affected by the dynamics of both the agent and the system.\nFinding the optimal appears to be equally challenging as finding directly.\nIn the following, we investigate two special cases where or ."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.2.1",
|
| 61 |
+
"parent_section_id": "3.2",
|
| 62 |
+
"section_name": "III-B1 The case",
|
| 63 |
+
"text": "By (17 ###reference_###), we have\nClearly, the first two terms depend on and but not directly.\nOn the other hand, for a fixed , the last term depends not only on initial condition but also on the state\u2019s total quadratic variation .\nThis in turn has the following two consequences:\n(i) For given dynamics of the agent and the system,\nchoosing far from the convergence point will yield better protection, and (ii) for a given , a dynamic\nthat produces a path with higher quadratic variation also has better protection.\nIn the latter case, it is tempting to conclude that faster convergence yields a smaller protection level.\nHowever, it could happen that, starting from the same initial condition, a dynamic with faster convergence may exhibit more (and possibly larger) transient oscillations and thus incurs a higher quadratic variation, hence improved protection.\nFinally, we can quantify the amount of randomness in the adversary\u2019s estimates of for the case .\nUsing (16 ###reference_###), one may write .\nFurthermore, we also obtain that .\nConsequently, the entropy power of is lower bounded by .\nWhile the first and second order moments partially characterize a random variable, the entropy power is a direct indication of its randomness.\nNotice that and therefore, the innovation-share communication scheme ensures a lower bound of on the asymptotic entropy power of the adversary error estimate."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.2.2",
|
| 67 |
+
"parent_section_id": "3.2",
|
| 68 |
+
"section_name": "III-B2 The case",
|
| 69 |
+
"text": "It is easy to see that (17 ###reference_###) yields\nClearly, is the optimal choice for the adversary. Moreover, the adversary in fact obtains an unbiased estimate of in this case.\nThe last expression of can be further simplified and given in the following corollary; see Appendix VII-C ###reference_### for a proof.\nIf and , then and .\nThis result shows that, to obtain an unbiased estimate, the adversary must always use the last successfully intercepted message. Additionally, similar to the previous case, both and the mismatch between and affect the protection of the algorithm.\nUsing the dynamic (14 ###reference_###), one may write\n and consequently,\n.\nThis shows that the achieved protection is related to the variation in the innovation signal whereas in the case, it is related to the quadratic variation of the state .\nThis is not surprising since for the case , the adversary\u2019s estimate of depends on the last intercepted message for some .\nTherefore, the variation in \u2019s trajectory should directly affect the protection.\nAn unbiased estimate may not be always preferable if the adversary\u2019s objective is to minimize the amount of protection.\nBased on the expressions of protection for both and along with , we notice that neither of them always dominates the other.\nThe best choice for depends on several parameters including the dynamics (1 ###reference_###), the objective of the agent, and the system itself.\nWithout such knowledge, the adversary is unable to determine which value of is the best to use.\nHowever, some qualitative analysis could be performed here; e.g., when , it appears beneficial to use than since in the latter case as .\nA thorough investigation on the choice of is a potential future direction to pursue."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "IV Application to Distributed Optimization",
|
| 75 |
+
"text": "Next we discuss application of our approach to distributed consensus optimization.\nConsider a network of nodes, where the underlying communication is characterized by a fixed directed graph .\nThe objective of all the nodes is to solve the following problem in a distributed fashion:\nwhere is the local cost function of node .\nWe consider a class of first-order distributed algorithms for solving this problem shown in Algorithm 1 ###reference_### below,\nwhere each node repeatedly updates its local state\nbased on its local gradient and information exchanged with its direct neighbors. The goal here is for all nodes to reach a consensus that is also an optimal solution of (21 ###reference_###).\nHere, is the local estimate of node , is some fixed step size,\n the weight associated with link , and a local estimate of global gradient , which is updated using only available local information\naccording to some mapping .\nTo implement the algorithm, it is important to note that, at every time step , each node needs to send its local estimate to its out-neighbors and receive from its in-neighbors .\nHere, we use to denote the vector and to denote .\nNext, we mention a few algorithms that belong to DCO. First, a version of the well-studied distributed (sub-)gradient method (see, e.g., [23 ###reference_b23###]) can be obtained with for arbitrary for all , and some diminishing step size sequence .\nThe distributed dual averaging in [24 ###reference_b24###] also takes a similar form with involving a type of projection with respect to some proximal function.\nSecond, the following choice\ncorresponds to a variant of the algorithm in [16 ###reference_b16###].\nIn this set up, each node is interacting with the system through its in- and out-neighbors.\nHere we consider a single adversary that can pick any node to eavesdrop.\nTo compute a most conservative estimate of the protection we consider the minimum of the protection of the nodes.\nThat is, we define\nto be the protection of the network in this case. Note that the presence of multiple adversaries is a potential future research direction, especially when the adversaries can communicate with each other.\nTheorem 1 ###reference_orem1### immediately shows that Algorithm 1 ###reference_### is not protected.\nTo achieve system level protection with innovation-shared communication scheme, we propose the a modification in the next section."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.1",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "IV-A Optimization with Innovation Communication",
|
| 81 |
+
"text": "Our proposed modification to DCO is a new communication protocol, where nodes communicate innovation values instead of their state values.\nEach agent needs the true state values of their in-neighbors to update their own states (see line 5 of Algorithm 1 ###reference_###).\nIn absence of these true state values, the agents need to perform an extra step to locally compute their in-neighbors\u2019 states:\nwhere is the received innovation signal and is the estimate of the in-neighbors\u2019 state at time .\nThe modified algorithm, named Distributed Innovation-shared Consensus Optimization (DICO), is presented in Algorithm 2 ###reference_###.\nCompared to DCO in terms of memory requirement, DICO further requires each node to maintain an estimate of its neighbors\u2019 states. However, it is important to note that DICO has the same communication overheads per iteration as DCO.\nMore importantly, DCO and DICO are indeed equivalent, and thus the convergence property of DCO carries over to DICO.\nThe convergence of Algorithm 2 ###reference_### is the same as Algorithm 1 ###reference_### if the same step size is used.\nFirst, note that the local estimates are in fact exact at any time , i.e., . This can be seen by comparing their dynamics, where and\n\nThus, DICO is equivalent to\nwhich are identical to DCO. The proof is completed.\n\u220e\nClearly, the innovation-shared method does not alter the convergence of DCO, which achieves exact solutions and is in contrast to existing methods based on differential privacy, where the accuracy of the solution is negatively affected by the amount of privacy.\nUnlike DCO, which is 0-protected against a single adversary, DICO provides a certain level of protection for each node as analyzed section III ###reference_###."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "Simulation Results",
|
| 87 |
+
"text": "We consider a logistic regression problem as follows\nwhere each node has access to samples of training data for and is a regularization parameter.\nHere, includes features of the -th sample of node , and is the corresponding label.\nClearly, (23 ###reference_###) is in the form of (21 ###reference_###) with for all . We consider , and for all . We generated the graph randomly while ensuring that it is connected and the weight matrix is doubly stochastic.\nThe dynamic of in DICO follows (22 ###reference_###).\n###figure_2### ###figure_3### ###figure_4### ###figure_5### In the first experiment, we investigate how affects the protection.\nTo that end, we randomly generated the initial states of the nodes (hereafter denoted as ) and considered and the eavesdropping probability .\nBy varying in the range , we illustrate in Fig. 2 ###reference_### both the exact protection derived in Theorem 2 ###reference_orem2### and the lower bound computed in Corollary 1 ###reference_ollary1### with .\nThe optimal choice for is sensitive to the problem parameters.\nFor the first experiment we chose in (23 ###reference_###) and we notice (c.f. Fig. 2(a) ###reference_sf1###) that the optimal value of is approximately which results in the lowest protection.\nThis plot also shows that, in general, negative values are preferred by the adversaries over positive ones.\nTo also demonstrate how sensitive the optimal value of can be, we only changed the parameter to , and the resulting plot (c.f. Fig. 2(b) ###reference_sf2###) is significantly different where small positive values are preferred.\nThis can be explained roughly as follows. Recall that the role of is to capture how decays overall. In the first case, the convergence time is much shorter with significant oscillations\n(in the components) of the state vector causing its time-difference to change signs frequently (c.f. Fig. 3 ###reference_###). Thus, to reflect this behavior, a value is needed. On the other hand, when decreases in the second case, we practically reduce the Lipschitz constant of the objective function, leading to a much slower convergence. In this regime, converges exponentially without oscillations most of the time. As a result, almost does not change signs and thus using would be more suitable.\nThese experiments also show that an unbiased estimate (i.e., ) might not be preferred over .\n###figure_6### ###figure_7### ###figure_8### \n\nNext, we validate the fact that as well as the algorithm parameters (e.g., ) influence the amount of protection.\nThe algorithm parameters control the trajectory taken by the node states and hence directly influencing the and the resulting protection.\nWe fix and vary the parameter in (22 ###reference_###) within the range .\nWe choose one initial state vector randomly and then scaled this initial state to generate initial state vectors .\nFor each pair of , we ran DICO and then computed two quantities: the protection amount and the convergence time defined as the number of iterations taken for the algorithm to converge to a value within of .\nEach line in Fig. 4 ###reference_### corresponds to a fixed .\nAs is scaled, the protection amount increases.\nThe dots on a fixed color line represent different values of .\nAs increases, the dots move from right to left.\nFrom Fig. 
4 ###reference_### we notice that, for a fixed value of , the amount of protection increases at the expense of convergence time when is increased.\nThis shows a trade-off between convergence and protection for Algorithm 2 ###reference_###.\nWhile the effect of on the protection (and convergence time) is somewhat straightforward from the expression in (17 ###reference_###), that of , however, is not so obvious.\nHere, impacts the trajectory of , which is also dependent on the objective function in (21 ###reference_###).\nThe effect of on the convergence-protection curve is even more interesting.\nAs is increased for a fixed ,\nthe amount of protection slightly reduces, and then starts increasing with when or .\nThis is because when is sufficiently small, the algorithm converges exponentially without oscillations,\ni.e., , where is a constant and is the convergence rate which increases with . Thus, , which decreases as increases. Large values of can lead to more oscillations,\nhence larger quadratic variations."
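The local objective used in this simulation is an L2-regularized logistic loss. The exact constants and scaling of (23) are not reproduced in the extracted text, so the sketch below uses the common textbook form as an assumption.

```python
import numpy as np

def local_logistic_loss(x, A, b, rho):
    """Local objective of one node: average logistic loss over its samples plus
    an L2 regularizer (standard form; constants are illustrative)."""
    margins = b * (A @ x)                        # b in {-1, +1}, A has one row per sample
    return np.mean(np.logaddexp(0.0, -margins)) + 0.5 * rho * np.dot(x, x)

def local_logistic_grad(x, A, b, rho):
    margins = b * (A @ x)
    sigma = 1.0 / (1.0 + np.exp(margins))        # derivative of log(1 + e^{-t})
    return -(A.T @ (b * sigma)) / len(b) + rho * x

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10))
b = np.sign(rng.normal(size=50))
x = np.zeros(10)
print(local_logistic_loss(x, A, b, rho=1e-2))    # equals log(2) at x = 0
```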
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "6",
|
| 91 |
+
"parent_section_id": null,
|
| 92 |
+
"section_name": "VI Conclusion",
|
| 93 |
+
"text": "In this paper, we have studied a privacy issue of distributed systems against eavesdroppers that can intercept (successfully with some probability) communications with the goal of estimating the agent\u2019s final state.\nWe show that the agents are unprotected in every scenario where they are required to share their states.\nIn contrast, by exchanging the innovation signals, the agent can harness the system-level protection that is inherently present in such systems.\nThe proposed innovation-shared method is a complementary to existing approaches such as differential privacy.\nOne may use differential-privacy or encryption based methods along with our proposed method to obtain a higher amount of privacy than what would have been achievable from using only differential-privacy/encryption based methods.\nSince our approach does not alter the accuracy of the converged solution, using it in juxtaposition to other privacy preserving techniques will not incur further accuracy loss.\nGiven the generic nature of our proposed method and the analysis, one may investigate particular problems (e.g., multi-agent consensus, rendezvous, distributed estimation) and analyze the achievable protection for such problems under the innovation-shared communication scheme.\nIn this work, we considered a class of distributed optimization problems as an example and demonstrate that the algorithm\u2019s parameters (e.g., , ) can in fact improve the achievable protection.\nWe show that there is a fundamental relation between the total quadratic variation of the innovation signal and the achievable protection."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "7",
|
| 97 |
+
"parent_section_id": null,
|
| 98 |
+
"section_name": "VII Appendix",
|
| 99 |
+
"text": "Suppose and . Let and . Then the series and \nexist. In fact, and\nwith .\nBy (15 ###reference_###) and conditions and , we have\nNext, we show that exists, i.e., is a convergent series. Since ,\nit follows that is absolutely convergent and .\nNow consider ; let , , and denote the partial sums. Then\nLet \nThen, by (16 ###reference_###) and (14 ###reference_###) we have\nBy multiplying both sides of (14 ###reference_###) with and then subtracting from the above relation, we obtain\nUsing and , we further have\n\nThus, .\nAs a result,\nSince , ,\nand\nThen,\n\nLetting implies that converges and (24 ###reference_###) holds.\n\u220e"
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "7.1",
|
| 103 |
+
"parent_section_id": "7",
|
| 104 |
+
"section_name": "VII-A Proof of Theorem 2",
|
| 105 |
+
"text": "Using (13 ###reference_###), can be expanded as follows\nwhere the last equality is obtained by noting that\nNow define for and .\nRearranging (VII-A ###reference_0###) yields\nUsing (15 ###reference_###) to replace in the last equation yields\nwith .\nNow let \nand note that\nUnrolling this relation, we have\nTo find the limit, we will show that exists. In fact, by Lemma 1 ###reference_ma1###, and exist, and\nThus, by (28 ###reference_###), we have \nwith\nSince and ,\nWe can expand as follows\nwhere the last term can be expressed as\nCombining the relations above with (30 ###reference_###) completes the proof."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "7.2",
|
| 109 |
+
"parent_section_id": "7",
|
| 110 |
+
"section_name": "VII-B Proof of Corollary 1",
|
| 111 |
+
"text": "Note that\n for any , where the first equality follows from Cauchy-Schwartz inequality and second one from . It remains to use (30 ###reference_###)\u2013(31 ###reference_###) and note that these bounds are tight when ."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "7.3",
|
| 115 |
+
"parent_section_id": "7",
|
| 116 |
+
"section_name": "VII-C Proof of Corollary 2",
|
| 117 |
+
"text": "Let us compute . From (13 ###reference_###), we have\n\nNow, using (14 ###reference_###), we also have , which implies\nThus, . Since and as , we have\n.\nGiven and , we have .\nWe now find . It follows from (20 ###reference_###) and Lemma 1 ###reference_ma1### that\n,\nwhere is obtained by using the definitions of and from Lemma 1 ###reference_ma1### and observing that when .\nFinally, is obtained by using the relationship that from Lemma 1 ###reference_ma1### along with the fact that when ."
|
| 118 |
+
}
|
| 119 |
+
],
|
| 120 |
+
"appendix": [],
|
| 121 |
+
"tables": {},
|
| 122 |
+
"image_paths": {
|
| 123 |
+
"1": {
|
| 124 |
+
"figure_path": "2409.09539v2_figure_1.png",
|
| 125 |
+
"caption": "Figure 1: Problem setup",
|
| 126 |
+
"url": "http://arxiv.org/html/2409.09539v2/extracted/5870549/Figures/Setup_3.png"
|
| 127 |
+
},
|
| 128 |
+
"2(a)": {
|
| 129 |
+
"figure_path": "2409.09539v2_figure_2(a).png",
|
| 130 |
+
"caption": "(a) \u03c3=1\ud835\udf0e1\\sigma=1italic_\u03c3 = 1\nFigure 2: Exact protection and lower bound in (18) with \u03b7=1\ud835\udf021\\eta=1italic_\u03b7 = 1.",
|
| 131 |
+
"url": "http://arxiv.org/html/2409.09539v2/x1.png"
|
| 132 |
+
},
|
| 133 |
+
"2(b)": {
|
| 134 |
+
"figure_path": "2409.09539v2_figure_2(b).png",
|
| 135 |
+
"caption": "(b) \u03c3=0.01\ud835\udf0e0.01\\sigma=0.01italic_\u03c3 = 0.01\nFigure 2: Exact protection and lower bound in (18) with \u03b7=1\ud835\udf021\\eta=1italic_\u03b7 = 1.",
|
| 136 |
+
"url": "http://arxiv.org/html/2409.09539v2/x2.png"
|
| 137 |
+
},
|
| 138 |
+
"3(a)": {
|
| 139 |
+
"figure_path": "2409.09539v2_figure_3(a).png",
|
| 140 |
+
"caption": "(a) \u03c3=1\ud835\udf0e1\\sigma=1italic_\u03c3 = 1\nFigure 3: The state trajectory of the first agent.",
|
| 141 |
+
"url": "http://arxiv.org/html/2409.09539v2/x3.png"
|
| 142 |
+
},
|
| 143 |
+
"3(b)": {
|
| 144 |
+
"figure_path": "2409.09539v2_figure_3(b).png",
|
| 145 |
+
"caption": "(b) \u03c3=0.01\ud835\udf0e0.01\\sigma=0.01italic_\u03c3 = 0.01\nFigure 3: The state trajectory of the first agent.",
|
| 146 |
+
"url": "http://arxiv.org/html/2409.09539v2/x4.png"
|
| 147 |
+
},
|
| 148 |
+
"4(a)": {
|
| 149 |
+
"figure_path": "2409.09539v2_figure_4(a).png",
|
| 150 |
+
"caption": "Figure 4: Convergence speed vs. protection from Theorem 2 by varying \u03b1\ud835\udefc\\alphaitalic_\u03b1 and x0subscript\ud835\udc650x_{0}italic_x start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT.",
|
| 151 |
+
"url": "http://arxiv.org/html/2409.09539v2/x5.png"
|
| 152 |
+
},
|
| 153 |
+
"4(b)": {
|
| 154 |
+
"figure_path": "2409.09539v2_figure_4(b).png",
|
| 155 |
+
"caption": "Figure 4: Convergence speed vs. protection from Theorem 2 by varying \u03b1\ud835\udefc\\alphaitalic_\u03b1 and x0subscript\ud835\udc650x_{0}italic_x start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT.",
|
| 156 |
+
"url": "http://arxiv.org/html/2409.09539v2/x6.png"
|
| 157 |
+
},
|
| 158 |
+
"4(c)": {
|
| 159 |
+
"figure_path": "2409.09539v2_figure_4(c).png",
|
| 160 |
+
"caption": "Figure 4: Convergence speed vs. protection from Theorem 2 by varying \u03b1\ud835\udefc\\alphaitalic_\u03b1 and x0subscript\ud835\udc650x_{0}italic_x start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT.",
|
| 161 |
+
"url": "http://arxiv.org/html/2409.09539v2/x7.png"
|
| 162 |
+
}
|
| 163 |
+
},
|
| 164 |
+
"validation": true,
|
| 165 |
+
"references": [
|
| 166 |
+
{
|
| 167 |
+
"1": {
|
| 168 |
+
"title": "Prentice-Hall, Inc., 1989.",
|
| 169 |
+
"author": "D. P. Bertsekas and J. N. Tsitsiklis, Parallel and distributed computation: numerical methods.",
|
| 170 |
+
"venue": null,
|
| 171 |
+
"url": null
|
| 172 |
+
}
|
| 173 |
+
}
|
| 174 |
+
],
|
| 175 |
+
"url": "http://arxiv.org/html/2409.09539v2"
|
| 176 |
+
}
|
20240921/2409.10925v2.json
ADDED
|
@@ -0,0 +1,134 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "HGSLoc: 3DGS-based Heuristic Camera Pose Refinement",
|
| 3 |
+
"abstract": "Visual localization refers to the process of determining camera poses and orientation within a known scene representation.\nThis task is often complicated by factors such as illumination changes and variations in viewing angles.\nIn this paper, we propose HGSLoc, a novel lightweight, plug-and-play pose optimization framework, which integrates 3D reconstruction with a heuristic refinement strategy to achieve higher pose estimation accuracy.\nSpecifically, we introduce an explicit geometric map for 3D representation and high-fidelity rendering, allowing the generation of high-quality synthesized views to support accurate visual localization. Our method demonstrates a faster rendering speed and higher localization accuracy compared to NeRF-based neural rendering localization approaches. We introduce a heuristic refinement strategy, its efficient optimization capability can quickly locate the target node, while we set the step-level optimization step to enhance the pose accuracy in the scenarios with small errors. With carefully designed heuristic functions, it offers efficient optimization capabilities, enabling rapid error reduction in rough localization estimations. Our method mitigates the dependence on complex neural network models while demonstrating improved robustness against noise and higher localization accuracy in challenging environments, as compared to neural network joint optimization strategies. The optimization framework proposed in this paper introduces novel approaches to visual localization by integrating the advantages of 3D reconstruction and heuristic refinement strategy, which demonstrates strong performance across multiple benchmark datasets, including 7Scenes and DB dataset. The implementation of our method will be made open-source.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "INTRODUCTION",
|
| 9 |
+
"text": "Visual localization is a research direction aimed to determine the pose and orientation of a camera within a known scene by analyzing and processing image data. This technique has significant applications in various fields, such as augmented reality (AR), robot navigation, and autonomous driving. By enabling devices to accurately identify their spatial location in complex 3D environments, visual localization facilitates autonomous navigation, environmental awareness, and real-time interaction. The core objective of visual localization is to estimate the camera\u2019s absolute pose. However, this task is challenging due to factors like illumination changes, dynamic occlusions, and variations in viewing angles, necessitating the development of robust and efficient algorithms to address these complexities.\nTwo major categories of methods in visual localization are Absolute Pose Regression (APR)[1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###] and Scene Coordinate Regression (SCR)[9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###]. APR is an end-to-end deep learning approach that directly regresses the camera\u2019s pos from the input image. The key advantages of APR lie in its simplicity and computational efficiency. However, APR exhibits notable limitations, particularly in complex or previously unseen environments, where its generalization capability is weak[12 ###reference_b12###]. In contrast, SCR adopts an indirect strategy for pose estimation. It first predicts the 3D scene coordinates of each image pixel using a deep learning model, followed by the computation of the camera\u2019s pose through spatial transformation of these coordinates. While SCR demonstrates high accuracy and robustness in familiar scenes, it incurs substantial computational costs due to the need to predict a large number of pixel-wise coordinates.\n###figure_1### In this paper, we propose a novel paradigm based on classical visual localization methods, aimed at improving the precision and accuracy of pos estimation in visual localization by integrating 3D reconstruction. Neural Radiance Field (NeRF)[13 ###reference_b13###], a neural network-based 3D scene modeling approach, is capable of synthesizing and rendering high-quality 3D scene images through neural network training. However, NeRF\u2019s pixel-wise training and inference mechanism results in significant computational overhead, limiting its practical applications. In contrast, 3D Gaussian Splatting (3DGS)[14 ###reference_b14###] mitigates this issue by representing scene points as Gaussian distributions, thereby significantly reducing the data processing load during rendering. Furthermore, 3DGS leverages CUDA kernel functions to accelerate training and inference, making it a prominent method in the field of 3D reconstruction. In known or partially known static environments, several approaches, such as 3DGS-ReLoc[15 ###reference_b15###] and GSLoc[16 ###reference_b16###], have been developed. The 3DGS-ReLoc method requires grid search for efficiency in coarse localization using the normalized cross-correlation (NCC)[17 ###reference_b17###] metric, which affects the localization accuracy.The GSLoc method has more steps and also uses MASt3R[18 ###reference_b18###] for assisted localization. Whereas, our method is a lightweight framework that enables efficient positional optimization for any image. 
As shown in Fig. 1 ###reference_###, by incorporating 3DGS, richer geometric information is available for pose estimation, and through heuristic optimization of coarse pos estimates, the accuracy of localization can be significantly enhanced in complex scenes.\nAbsolute Pose Regression (APR) and Scene Coordinate Regression (SCR) provide coarse pose estimates that serve as a foundation for further refinement. To achieve high-quality scene rendering, we introduce the 3D Gaussian Splatting (3DGS), which enriches the database imagery by constructing a dense point cloud, facilitating more detailed scene reconstruction. Building on this, we employ a heuristic refinement algorithm[19 ###reference_b19###] to optimize the estimated poses. With its efficient pathfinding capabilities, combined with a custom-designed heuristic function, the algorithm efficiently adjusts the rendered view of the current pose to match the query image, resulting in more precise pose alignment. Our modular approach significantly reduces dependence on expensive neural network training, offering a more cost-effective solution compared to deep learning methods typically used for pose optimization. Additionally, our method exhibits strong generalization capabilities, maintaining rapid convergence and substantial improvements in pose accuracy, even in the presence of noisy pose data. This adaptability is particularly valuable in practical applications, as it ensures that the proposed method can be deployed across diverse platforms and data quality levels, providing a robust solution for a wide range of scenarios. The effectiveness of our approach is demonstrated through experiments conducted on several benchmark datasets, including 7Scenes and DB. These results underscore the method\u2019s performance on classical visual localization datasets as well as those related to 3D Gaussian splatting. The contributions of our approach are summarized as follows:\nWe propose a lightweight, plug-and-play pose optimization framework that facilitates efficient pose refinement for any query image.\nWe design a heuristic refinement strategy and set the step-level optimization step to adapt various complex scenes.\nOur proposed framework achieves higher localization accuracy than NeRF-based neural rendering localization approaches [20 ###reference_b20###] and outperforms neural network joint pose optimization strategy in noisy conditions."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II RELATED WORK",
|
| 15 |
+
"text": "In this section, we introduce visual localization methods and 3D Gaussian Splatiing."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A Visual localization",
|
| 21 |
+
"text": "PoseNet represents a foundational work in the domain of Absolute Pose Regression (APR)[1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###], pioneering the direct regression of pose from image data using convolutional neural networks (CNNs). Unlike traditional localization techniques, which typically involve intricate feature extraction, matching, and geometric computation, PoseNet[1 ###reference_b1###] introduces an end-to-end framework that seamlessly integrates these steps into a unified neural network learning process. This approach simplifies the mapping of image data to pose estimation, making it highly suitable for visual localization tasks across diverse environments. Building on PoseNet, MS-Transformer[7 ###reference_b7###] enhances performance by incorporating global context modeling, enabling more effective handling of objects and structures at various scales within an image. The introduction of a multi-head self-attention mechanism allows for a better understanding of complex scenes, leading to significant improvements in pose regression accuracy. Likewise, DFNet[6 ###reference_b6###] extends the capabilities of APR by integrating information from multimodal sensors, offering more comprehensive and detailed modeling of visual scenes. This fusion of multimodal data leverages the complementary strengths of different data sources, enhancing robustness and adaptability to various environmental factors. However, despite the advantages offered by APR methods, they remain vulnerable to noise and environmental variability. Under adverse conditions, such as poor lighting, unfavorable weather, or occlusions, the regression models\u2019 accuracy in pose estimation can degrade significantly.\nScene Coordinate Regression (SCR) methods[9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###] estimate camera pose by learning the mapping between image pixels and corresponding 3D scene coordinates. These approaches bypass the complex feature matching procedures characteristic of traditional localization methods, thereby enhancing the efficiency and robustness of pose estimation. DSAC*[9 ###reference_b9###] further advances SCR by introducing a differentiable hypothesis selection mechanism, allowing the model to learn how to choose the optimal pose hypothesis during network training. Additionally, it accommodates both RGB and RGB-D image inputs, incorporating depth map information into the pose estimation process, which enhances the model\u2019s ability to interpret and manage complex scenes. On the other hand, ACE[10 ###reference_b10###] accelerates feature matching by optimizing the encoding and decoding of image coordinates, which enables faster processing. Furthermore, it demonstrates resilience to noise and lighting variations, improving its robustness in dynamic or less controlled environments. By addressing these common challenges, ACE contributes to more reliable pose estimation in scenes where traditional methods may struggle to maintain accuracy."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B 3D Gaussian Splatting",
|
| 27 |
+
"text": "3D Gaussian Splatting (3DGS)[14 ###reference_b14###], an emerging method in 3D reconstruction, has rapidly gained prominence since its introduction. This method significantly accelerates the synthesis of new views by modeling the scene with Gaussian ellipsoids and utilizing advanced rendering methods. Within the realm of 3DGS research, various techniques have enhanced and optimized 3DGS in different aspects, such as quality improvement[21 ###reference_b21###], compression and regularization[22 ###reference_b22###], dynamic 3D reconstruction[23 ###reference_b23###], and handling challenging inputs[24 ###reference_b24###]. The advancement of 3DGS methods not only enhances the quality of scene reconstruction but also speeds up rendering, offering novel and improved approaches for visual localization tasks. For instance, GSLoc leverages rendered images from new viewpoints for matching and pose optimization, while the InstantSplat[25 ###reference_b25###] method, utilizing DUSt3R[26 ###reference_b26###], achieves rapid and high-quality scene reconstruction by jointly optimizing poses with 3D Gaussian parameters. Our proposed method builds upon 3DGS reconstructed scenes and employs heuristic pose optimization to enhance pose accuracy in specific scenarios while preserving the original pose accuracy."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "III METHOD",
|
| 33 |
+
"text": "In this section, we outline the fundamental principles of the 3D Gaussian Splatting (3DGS) and heuristic refinement strategy, along with their integrated implementation. An overview of our framework is depicted in Fig. 2 ###reference_###.\n###figure_2###"
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "III-A Explicit Geometric Map",
|
| 39 |
+
"text": "3D Gaussian Splatting (3DGS)[14 ###reference_b14###] is a method for representing and rendering three-dimensional scenes. It models the distribution of objects within a scene using 3D Gaussian functions and approximates object surface colors through spherical harmonic coefficients. This method not only delivers an accurate depiction of scene geometry but also effectively captures and renders the lighting and color variations. In 3DGS, each primitive is characterized by a three-dimensional covariance matrix and mean value :\nwhere , represents the rotation, represents the anisotropy scale.\nWhen projecting onto the viewing plane, 3D Gaussian Splatting (3DGS) utilizes a 2D Gaussian directly, rather than performing the axial integral of a 3D Gaussian. This approach addresses the computational challenge of requiring a large number of samples by limiting the computation to the number of Gaussians, thereby enhancing efficiency. The projected 2D covariance matrix and means are and , respectively, where W represents the transformation from the world coordinate system to the camera coordinate system and J denotes the radial approximation of the Jacobian matrix for the projection transformation.\nDuring the rendering phase, spatial depth and tile ID are utilized as key values to sort the Gaussian primitives using GPU-based ordering. Subsequently, the color of each pixel is computed based on the volume rendering formula:\nWhere:\nA major advantage of 3D Gaussian Splatting (3DGS) is its efficient rendering speed. By leveraging CUDA kernel functions for pixel-level parallel processing, 3DGS achieves rapid training and rendering. Additionally, 3DGS employs adaptive control strategies to accommodate objects of various shapes, enhancing both the accuracy and efficiency of rendering. This results in high-quality reconstructed scenes and more realistic new-view images, which provide opportunities for further advancements in pose accuracy."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "III-B Heuristic Algorithm Implementation",
|
| 45 |
+
"text": "Heuristic approaches[19 ###reference_b19###] are often implemented to path planning and graph search that combines the strengths of depth-first search (DFS) and breadth-first search (BFS). It has been widely applied to various real-world problems, including game development, robot navigation, and geographic information systems (GIS). The primary goal of the heuristic algorithm is to efficiently find the optimal path from an initial node to a goal node, where each node represents a state within the search space. The algorithm relies on an evaluation function, , to prioritize nodes for expansion. This function typically consists of two components:\nWhere function is the actual cost from the start node to the current node; function is the estimated cost from the current node to the target node.\nThe core idea of the heuristic algorithm is to minimize the number of expanded nodes by guiding the search direction using a heuristic function, , while ensuring the least costly path. The heuristic function must satisfy two important properties: Admissibility and Consistency. Admissibility ensures that never overestimates the cost of traveling from node to the target node. Consistency requires that for any node and its neighboring node , the heuristic function satisfies the following condition:\nWhere denotes the actual cost from to , which ensures that the algorithm does not repeatedly return to an already expanded node. The algorithm has Optimality and Completeness, i.e., it is guaranteed to find the most optimal path from the start node to the goal node, and for a finite search space, the algorithm always finds a solution.\nWe use 3DGS as a new-viewpoint image renderer with the goal of finding a more suitable pose within a certain range around the initial pose. A pose is characterized by , where represent quaternion of a rotation and represent translation. We set the rotation and translation variations and , and the current node is transformed to other neighboring nodes by different variations. The pose can be viewed as nodes in the search space, while the transitions between different pose correspond to edges in the graph, and this process can be viewed as expanding nodes in the search graph. In this application, the key to the heuristic algorithm is to design a reasonable cost function. We design the actual cost of a child node as the sum of the actual cost of the current node and the length of the path to the child node, and the estimated cost as the difference value between the rendered image and the query image corresponding to the pose of the current node:\nWhere the represents the current query image and represents the rendering image of current child node.\nThe heuristic function effectively guides the algorithm toward the optimal pose, ultimately identifying the pose that produces a rendered image most similar to the query image. We provide the pseudo-code for the algorithm\u2019s implementation in Tab. I ###reference_###. In this pseudo-code, OpenList is used to store nodes awaiting expansion, while ClosedList contains nodes that have already been expanded.\n###table_1###"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "IV Experiment",
|
| 51 |
+
"text": "In this section, we compare and analyze the coarse pose with the optimized pose, including pose accuracy and precision."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.1",
|
| 55 |
+
"parent_section_id": "4",
|
| 56 |
+
"section_name": "IV-A Implementation",
|
| 57 |
+
"text": "The deep learning framework employed in this work is PyTorch[29 ###reference_b29###]. Each scene is reconstructed using 3D Gaussian Splatting (3DGS) with 30,000 training iterations, running on RTX 4090 GPUs. For the 7Scenes datasets, we adopt the SfM ground truth (GT) provided by [30 ###reference_b30###]."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.2",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "IV-B Datasets, Metrics and Baselines",
|
| 63 |
+
"text": "We evaluated our method on two public datasets: 7scenes and Deep Blending. In the case of the 7scenes datasets[31 ###reference_b31###, 32 ###reference_b32###], the official test lists were used as query images, while the remaining images were utilized for training. For the Deep Blending dataset, we specifically selected the drjohnson and playroom scenes, and we constructed a test image set following the 1-out-of-8 approach suggested by Mip-NeRF[33 ###reference_b33###].\nWe show the median rotation and translation error, and also provide the ratio of pose error within 1cm/1\u00b0.\nOur approach builds on an initial coarse pose estimation. For the APR[1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###] framework, we have selected the widely recognized Marepo[8 ###reference_b8###] method as the benchmark for comparison. Similarly, for the SCR[9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###] framework, we have chosen the classical ACE[10 ###reference_b10###] method as the benchmark for comparison."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.3",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "IV-C Analysis of results",
|
| 69 |
+
"text": "For the 7Scenes dataset, we evaluate the performance of Marepo[8 ###reference_b8###] and ACE[10 ###reference_b10###] after incorporating HGSLoc. Tab. II ###reference_### demonstrates that our method effectively reduces the error in the coarse pose estimates obtained from both Marepo and ACE. Compared to other NRP methods, our approach achieves results with smaller relative pose errors. Furthermore, Tab. III ###reference_### presents the ratio of query images with relative pose errors of up to 1 cm and 1\u00b0, showing significant improvements after applying the HGSLoc framework. This indicates that our method efficiently optimizes cases involving small relative pose errors, further enhancing accuracy.\nWe selected two scenes, \u201dplayroom\u201d and \u201ddrjohnson,\u201d for testing. For both the Marepo[8 ###reference_b8###] and ACE[10 ###reference_b10###] methods, we observed that the coarse pose errors were significantly large. This may be attributed to the higher complexity of the DB dataset compared to the 7Scenes datasets, as well as the limited training data, which may have prevented model convergence. Consequently, we utilized an alternative method (HLoc) that leverages point clouds to obtain an initial pose estimate and compared the results. As shown in Tab. IV ###reference_###, the improvement from boosting is not pronounced, likely due to the high image quality of the DB dataset, which already provided relatively accurate preliminary poses with the HLoc framework. To better demonstrate the effectiveness of our pose optimization method, Tab. V ###reference_### introduces various levels of step noise, making the visualization results more intuitive.\n(a) playroom\n(b) drjohnson\nAs shown in Tab. VI ###reference_###, to further demonstrate the effectiveness of our method, we compare it with an alternative joint optimization strategy[25 ###reference_b25###]. For this comparison, a noise level of granularity is introduced to the initial pose. Our method employs heuristic optimization based on high-quality scene reconstruction obtained through the 3DGS[14 ###reference_b14###] method, whereas the alternative strategy jointly optimizes both the scene reconstruction and the initial pose[25 ###reference_b25###].\nBy inputting the pose into the 3D reconstructed scene, we generate a rendered image that visualizes the pose. Each query image corresponds to the GT pose, and the discrepancy between the estimated pose and the GT pose is reflected in the rendered images from various viewpoints. To better observe this error and the improvement achieved through our optimization method, we select viewpoints with significant accuracy improvements for qualitative analysis. Fig. 3 ###reference_### demonstrate that, when using our framework on the 7Scenes datasets, the rendered images more closely match the GT images. Fig. 4 ###reference_###illustrates the results of applying our framework to noisy poses in the DB dataset, showing that our method effectively refines the original pose, resulting in rendered images that closely resemble the GT images.\n###figure_3### ###figure_4### In our method, we use the sum of pixel-by-pixel differences as the heuristic function. To demonstrate the effectiveness of this heuristic function, Tab. VII ###reference_### compares the results obtained using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) as alternative heuristic functions. 
Higher values of PSNR and SSIM indicate better image quality and structural similarity, whereas we would like to see them take the opposite number as the value of the heuristic function is as small as possible. To illustrate the impact of different heuristic functions more clearly, we applied these comparisons to the DB dataset, which introduces significant noise."
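A sketch of the three heuristic variants compared in Tab. VII: the pixel-difference heuristic is used as a cost directly, while PSNR and SSIM, being quality scores, are negated. The SSIM call assumes HxWx3 uint8 images and a recent scikit-image version (channel_axis argument); these are assumptions for illustration.

```python
# Alternative heuristic functions (lower is better for all three).
import numpy as np
from skimage.metrics import structural_similarity

def h_pixel_diff(query, rendered):
    """Summed absolute pixel difference, used directly as the cost."""
    return float(np.abs(query.astype(np.float32) - rendered.astype(np.float32)).sum())

def h_psnr(query, rendered, data_range=255.0):
    """Negated PSNR: higher PSNR means a better match, hence a lower cost."""
    mse = np.mean((query.astype(np.float32) - rendered.astype(np.float32)) ** 2)
    psnr = 10.0 * np.log10(data_range ** 2 / max(mse, 1e-12))
    return -psnr

def h_ssim(query, rendered, data_range=255.0):
    """Negated SSIM: higher structural similarity means a lower cost."""
    ssim = structural_similarity(query, rendered, data_range=data_range, channel_axis=-1)
    return -ssim
```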
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "CONCLUSIONS",
|
| 75 |
+
"text": "In this study, we propose a lightweight, plug-and-play visual localization optimization framework that combines heuristic refinement strategy with 3D reconstruction to significantly enhance pose estimation accuracy, achieving SOTA performance on two datasets. Compared to NeRF-based neural rendering localization methods[20 ###reference_b20###], the proposed approach demonstrates superior rendering speed and enhanced localization accuracy. Through the integration of well-designed heuristic functions, the method efficiently optimizes and rapidly reduces errors in coarse localization estimations. Our modular approach not only reduces reliance on complex neural network training, enhancing the algorithm\u2019s flexibility and practicality, but also demonstrates robust performance in noisy environments, facilitating rapid convergence and higher accuracy. This robustness ensures that the method performs consistently across various platforms and data qualities. In summary, the integration of heuristic refinement strategy with 3D Gaussian distribution offers a novel and effective solution for visual localization, providing a valuable reference for the development and optimization of future visual localization systems."
|
| 76 |
+
}
|
| 77 |
+
],
|
| 78 |
+
"appendix": [],
|
| 79 |
+
"tables": {
|
| 80 |
+
"1": {
|
| 81 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Heuristic pose optimization strategy</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T1.6.6\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6.7.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.6.6.7.1.1\">Heuristic Algorithm</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6.8.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.6.6.8.2.1\">while openList is not empty:</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.1.1.1.1\">\u00a0\u00a0\u2003\u20031. pop top node with from openList.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6.9.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.6.6.9.3.1\">\u00a0\u00a0\u2003\u20032. if top is destination node:</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6.10.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.6.6.10.4.1\">\u00a0\u00a0\u2003\u2003\u2003\u2003break</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6.11.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.6.6.11.5.1\">\u00a0\u00a0\u2003\u20033. closeList.push(top)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6.12.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.6.6.12.6.1\">\u00a0\u00a0\u2003\u20034. for each child node of top:</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6.13.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.6.6.13.7.1\">\u00a0\u00a0\u2003\u2003\u2003\u2003if child in closeList:</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6.14.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.6.6.14.8.1\">\u00a0\u00a0\u2003\u2003\u2003\u2003\u2003\u2003continue</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.2.2.2.1\">\u00a0\u00a0\u2003\u2003\u2003\u2003computes the from the start node to child.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6.15.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.6.6.15.9.1\">\u00a0\u00a0\u2003\u2003\u2003\u2003if child not in openList:</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.3.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.3.3.1\">\u00a0\u00a0\u2003\u2003\u2003\u2003\u2003\u2003\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.4.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.4.4.4.1\">\u00a0\u00a0\u2003\u2003\u2003\u2003\u2003\u2003\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6.16.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.6.6.16.10.1\">\u00a0\u00a0\u2003\u2003\u2003\u2003\u2003\u2003openList.push(child)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.5.5.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.5.5.5.1\">\u00a0\u00a0\u2003\u2003\u2003\u2003elif :</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.6.6.6.1\">\u00a0\u00a0\u2003\u2003\u2003\u2003\u2003\u2003 = tentative cost</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6.17.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.T1.6.6.17.11.1\">\u00a0\u00a0\u2003\u2003\u2003\u2003\u2003\u2003heap adjustments</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 82 |
+
"capture": "TABLE I: Heuristic pose optimization strategy"
|
| 83 |
+
},
|
| 84 |
+
"2": {
|
| 85 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption\" style=\"font-size:70%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>We present the results of comparison experiments on the 7Scenes dataset, highlighting the median translation and rotation errors (cm/\u00b0) of the pose relative to the ground truth (GT) pose for various methods across seven scenes. The best results are indicated in bold. \u201dNRP\u201d refers to Neural Render Pose Estimation.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.3\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.3.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.1.1\" style=\"padding:0.35pt 11.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.1.2\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.2.1\" style=\"font-size:70%;\">Method</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.1.1.3\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.3.1\" style=\"font-size:70%;\">chess</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.1.1.4\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.4.1\" style=\"font-size:70%;\">fire</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.1.1.5\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.5.1\" style=\"font-size:70%;\">heads</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.1.1.6\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.6.1\" style=\"font-size:70%;\">office</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.1.1.7\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.7.1\" style=\"font-size:70%;\">pumpkin</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.1.1.8\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.8.1\" style=\"font-size:70%;\">redkitchen</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.1.9\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.9.1\" style=\"font-size:70%;\">stairs</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.1.1.10\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1.10.1\" style=\"font-size:70%;\">Avg.\u2193[cm/\u00b0]</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.3.2.2.1\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.2.2.1.1\" style=\"font-size:70%;\">APR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.3.2.2.2\" style=\"padding:0.35pt 11.5pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.3.2.2.2.1\" style=\"font-size:70%;\">Marepo</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T2.3.2.2.2.2.1\" style=\"font-size:70%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.10925v2#bib.bib8\" title=\"\">8</a><span class=\"ltx_text\" id=\"S4.T2.3.2.2.2.3.2\" style=\"font-size:70%;\">]</span></cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.2.2.3\" 
style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.2.2.3.1\" style=\"font-size:70%;\">1.9/0.83</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.2.2.4\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.2.2.4.1\" style=\"font-size:70%;\">2.3/0.91</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.2.2.5\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.2.2.5.1\" style=\"font-size:70%;\">2.2/1.27</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.2.2.6\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.2.2.6.1\" style=\"font-size:70%;\">2.8/0.93</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.2.2.7\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.2.2.7.1\" style=\"font-size:70%;\">2.5/0.88</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.2.2.8\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.2.2.8.1\" style=\"font-size:70%;\">3.0/0.99</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.3.2.2.9\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.2.2.9.1\" style=\"font-size:70%;\">5.8/1.50</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.2.2.10\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.2.2.10.1\" style=\"font-size:70%;\">2.9/1.04</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.3.3.3.1\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.3.3.1.1\" style=\"font-size:70%;\">SCR</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.3.3.3.2\" style=\"padding:0.35pt 11.5pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.3.3.3.2.1\" style=\"font-size:70%;\">ACE</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T2.3.3.3.2.2.1\" style=\"font-size:70%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.10925v2#bib.bib10\" title=\"\">10</a><span class=\"ltx_text\" id=\"S4.T2.3.3.3.2.3.2\" style=\"font-size:70%;\">]</span></cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.3\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.3.3.3.1\" style=\"font-size:70%;\">0.6/0.18</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.4\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.3.3.4.1\" style=\"font-size:70%;\">0.8/0.31</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.5\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.3.3.5.1\" style=\"font-size:70%;\">0.6/0.33</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.6\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.3.3.6.1\" style=\"font-size:70%;\">1.1/0.28</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.7\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.3.3.7.1\" style=\"font-size:70%;\">1.2/0.22</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.8\" style=\"padding:0.35pt 11.5pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.3.3.3.8.1\" style=\"font-size:70%;\">0.8/</span><span 
class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.3.8.2\" style=\"font-size:70%;\">0.20</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.3.3.3.9\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.3.3.9.1\" style=\"font-size:70%;\">2.9/0.81</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.3.10\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.3.3.10.1\" style=\"font-size:70%;\">1.1/0.33</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.3.4.4.1\" rowspan=\"4\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.4.4.1.1\" style=\"font-size:70%;\">NRP</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.3.4.4.2\" style=\"padding:0.35pt 11.5pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.3.4.4.2.1\" style=\"font-size:70%;\">HR-APR</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T2.3.4.4.2.2.1\" style=\"font-size:70%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.10925v2#bib.bib27\" title=\"\">27</a><span class=\"ltx_text\" id=\"S4.T2.3.4.4.2.3.2\" style=\"font-size:70%;\">]</span></cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.4.4.3\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.4.4.3.1\" style=\"font-size:70%;\">2.0/0.55</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.4.4.4\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.4.4.4.1\" style=\"font-size:70%;\">2.0/0.75</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.4.4.5\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.4.4.5.1\" style=\"font-size:70%;\">2.0/1.45</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.4.4.6\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.4.4.6.1\" style=\"font-size:70%;\">2.0/0.64</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.4.4.7\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.4.4.7.1\" style=\"font-size:70%;\">2.0/0.62</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.4.4.8\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.4.4.8.1\" style=\"font-size:70%;\">2.0/0.67</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.3.4.4.9\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.4.4.9.1\" style=\"font-size:70%;\">5.0/1.30</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.4.4.10\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.4.4.10.1\" style=\"font-size:70%;\">2.4/0.85</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.3.5.5.1\" style=\"padding:0.35pt 11.5pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.3.5.5.1.1\" style=\"font-size:70%;\">NeRFMatch</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T2.3.5.5.1.2.1\" style=\"font-size:70%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.10925v2#bib.bib28\" title=\"\">28</a><span class=\"ltx_text\" id=\"S4.T2.3.5.5.1.3.2\" 
style=\"font-size:70%;\">]</span></cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.5.5.2\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.5.5.2.1\" style=\"font-size:70%;\">0.9/0.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.5.5.3\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.5.5.3.1\" style=\"font-size:70%;\">1.3/0.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.5.5.4\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.5.5.4.1\" style=\"font-size:70%;\">1.6/1.0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.5.5.5\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.5.5.5.1\" style=\"font-size:70%;\">3.3/0.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.5.5.6\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.5.5.6.1\" style=\"font-size:70%;\">3.2/0.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.5.5.7\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.5.5.7.1\" style=\"font-size:70%;\">1.3/0.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.3.5.5.8\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.5.5.8.1\" style=\"font-size:70%;\">7.2/1.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.5.5.9\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.5.5.9.1\" style=\"font-size:70%;\">2.7/0.70</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.3.6.6.1\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.6.6.1.1\" style=\"font-size:70%;\">Marepo+HGSLoc</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.6.6.2\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.6.6.2.1\" style=\"font-size:70%;\">1.5/0.68</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.6.6.3\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.6.6.3.1\" style=\"font-size:70%;\">1.4/0.62</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.6.6.4\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.6.6.4.1\" style=\"font-size:70%;\">1.5/0.92</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.6.6.5\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.6.6.5.1\" style=\"font-size:70%;\">2.7/0.80</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.6.6.6\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.6.6.6.1\" style=\"font-size:70%;\">1.8/0.46</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.6.6.7\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.6.6.7.1\" style=\"font-size:70%;\">2.2/0.63</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.3.6.6.8\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.6.6.8.1\" style=\"font-size:70%;\">4.8/1.34</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.6.6.9\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" id=\"S4.T2.3.6.6.9.1\" style=\"font-size:70%;\">2.3/0.78</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S4.T2.3.7.7.1\" style=\"padding:0.35pt 11.5pt;\"><span class=\"ltx_text\" 
id=\"S4.T2.3.7.7.1.1\" style=\"font-size:70%;\">ACE+HGSLoc</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.3.7.7.2\" style=\"padding:0.35pt 11.5pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.2.1\" style=\"font-size:70%;\">0.5</span><span class=\"ltx_text\" id=\"S4.T2.3.7.7.2.2\" style=\"font-size:70%;\">/</span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.2.3\" style=\"font-size:70%;\">0.17</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.3.7.7.3\" style=\"padding:0.35pt 11.5pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.3.1\" style=\"font-size:70%;\">0.6</span><span class=\"ltx_text\" id=\"S4.T2.3.7.7.3.2\" style=\"font-size:70%;\">/</span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.3.3\" style=\"font-size:70%;\">0.25</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.3.7.7.4\" style=\"padding:0.35pt 11.5pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.4.1\" style=\"font-size:70%;\">0.5</span><span class=\"ltx_text\" id=\"S4.T2.3.7.7.4.2\" style=\"font-size:70%;\">/</span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.4.3\" style=\"font-size:70%;\">0.29</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.3.7.7.5\" style=\"padding:0.35pt 11.5pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.5.1\" style=\"font-size:70%;\">1.0</span><span class=\"ltx_text\" id=\"S4.T2.3.7.7.5.2\" style=\"font-size:70%;\">/</span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.5.3\" style=\"font-size:70%;\">0.25</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.3.7.7.6\" style=\"padding:0.35pt 11.5pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.6.1\" style=\"font-size:70%;\">1.1</span><span class=\"ltx_text\" id=\"S4.T2.3.7.7.6.2\" style=\"font-size:70%;\">/</span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.6.3\" style=\"font-size:70%;\">0.21</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.3.7.7.7\" style=\"padding:0.35pt 11.5pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.7.1\" style=\"font-size:70%;\">0.7</span><span class=\"ltx_text\" id=\"S4.T2.3.7.7.7.2\" style=\"font-size:70%;\">/</span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.7.3\" style=\"font-size:70%;\">0.20</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.3.7.7.8\" style=\"padding:0.35pt 11.5pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.8.1\" style=\"font-size:70%;\">2.8</span><span class=\"ltx_text\" id=\"S4.T2.3.7.7.8.2\" style=\"font-size:70%;\">/</span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.8.3\" style=\"font-size:70%;\">0.69</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.3.7.7.9\" style=\"padding:0.35pt 11.5pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.9.1\" style=\"font-size:70%;\">1.0</span><span class=\"ltx_text\" id=\"S4.T2.3.7.7.9.2\" style=\"font-size:70%;\">/</span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.7.7.9.3\" style=\"font-size:70%;\">0.29</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 86 |
+
"capture": "TABLE II: We present the results of comparison experiments on the 7Scenes dataset, highlighting the median translation and rotation errors (cm/\u00b0) of the pose relative to the ground truth (GT) pose for various methods across seven scenes. The best results are indicated in bold. \u201dNRP\u201d refers to Neural Render Pose Estimation."
|
| 87 |
+
},
|
| 88 |
+
"3": {
|
| 89 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:70%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>We present the average percentage of pose errors within 1 cm and 1\u00b0 on the 7Scenes dataset. \u201dNRP\u201d denotes neural render pose estimation.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.1.1\" style=\"padding:0.35pt 21.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.1.2\" style=\"padding:0.35pt 21.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.3.1.1.2.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Methods</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.3.1.1.3\" style=\"padding:0.35pt 21.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.3.1.1.3.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Avg.\u2191[1cm,1\u00b0]</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.3.2.1.1\" style=\"padding:0.35pt 21.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.3.2.1.1.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0APR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.3.2.1.2\" style=\"padding:0.35pt 21.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T3.3.2.1.2.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Marepo</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T3.3.2.1.2.2.1\" style=\"font-size:70%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.10925v2#bib.bib8\" title=\"\">8</a><span class=\"ltx_text\" id=\"S4.T3.3.2.1.2.3.2\" style=\"font-size:70%;\">]</span></cite><span class=\"ltx_text\" id=\"S4.T3.3.2.1.2.4\" style=\"font-size:70%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.2.1.3\" style=\"padding:0.35pt 21.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.3.2.1.3.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a06.2</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.3.2.1\" style=\"padding:0.35pt 21.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.3.3.2.1.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0SCR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.3.2.2\" style=\"padding:0.35pt 21.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T3.3.3.2.2.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ACE</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T3.3.3.2.2.2.1\" style=\"font-size:70%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.10925v2#bib.bib10\" title=\"\">10</a><span class=\"ltx_text\" id=\"S4.T3.3.3.2.2.3.2\" style=\"font-size:70%;\">]</span></cite><span class=\"ltx_text\" id=\"S4.T3.3.3.2.2.4\" style=\"font-size:70%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.2.3\" style=\"padding:0.35pt 21.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.3.3.2.3.1\" 
style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a053.7</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.4.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.4.3.1\" style=\"padding:0.35pt 21.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.3.4.3.1.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0NRP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.4.3.2\" style=\"padding:0.35pt 21.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.3.4.3.2.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Marepo+HGSLoc</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.4.3.3\" style=\"padding:0.35pt 21.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.3.4.3.3.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a019.1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.5.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.3.5.4.1\" style=\"padding:0.35pt 21.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.3.5.4.1.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0NRP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.3.5.4.2\" style=\"padding:0.35pt 21.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.3.5.4.2.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ACE+HGSLoc</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.3.5.4.3\" style=\"padding:0.35pt 21.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.3.5.4.3.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a059.1</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 90 |
+
"capture": "TABLE III: We present the average percentage of pose errors within 1 cm and 1\u00b0 on the 7Scenes dataset. \u201dNRP\u201d denotes neural render pose estimation."
|
| 91 |
+
},
|
| 92 |
+
"4": {
|
| 93 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:70%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE IV: </span> We present the median translation and rotation errors (cm/\u00b0) for both the initial estimated pose and the optimized pose relative to the GT pose.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.3.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T4.3.1.1.1\" style=\"padding:0.35pt 24.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T4.3.1.1.2\" style=\"padding:0.35pt 24.0pt;\"><span class=\"ltx_text\" id=\"S4.T4.3.1.1.2.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0init error</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.3.1.1.3\" style=\"padding:0.35pt 24.0pt;\"><span class=\"ltx_text\" id=\"S4.T4.3.1.1.3.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0refine error</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T4.3.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.3.2.1.1\" style=\"padding:0.35pt 24.0pt;\"><span class=\"ltx_text\" id=\"S4.T4.3.2.1.1.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0playrroom</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.3.2.1.2\" style=\"padding:0.35pt 24.0pt;\"><span class=\"ltx_text\" id=\"S4.T4.3.2.1.2.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.7/0.060</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.3.2.1.3\" style=\"padding:0.35pt 24.0pt;\"><span class=\"ltx_text\" id=\"S4.T4.3.2.1.3.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.6/0.059</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.3.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T4.3.3.2.1\" style=\"padding:0.35pt 24.0pt;\"><span class=\"ltx_text\" id=\"S4.T4.3.3.2.1.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0drjohnson</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T4.3.3.2.2\" style=\"padding:0.35pt 24.0pt;\"><span class=\"ltx_text\" id=\"S4.T4.3.3.2.2.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.3/0.055</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T4.3.3.2.3\" style=\"padding:0.35pt 24.0pt;\"><span class=\"ltx_text\" id=\"S4.T4.3.3.2.3.1\" style=\"font-size:70%;\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a00.3/0.054</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 94 |
+
"capture": "TABLE IV: We present the median translation and rotation errors (cm/\u00b0) for both the initial estimated pose and the optimized pose relative to the GT pose."
|
| 95 |
+
},
|
| 96 |
+
"5": {
|
| 97 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:70%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE V: </span>We show the median translation and rotation error (m/\u00b0) for the poses with noise and for the poses after optimization. (q2, t1) denotes the introduction of noise at the percentile of qvec, decile of tvec, and the rest is the same.</figcaption><div class=\"ltx_flex_figure\">\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<p class=\"ltx_p ltx_figure_panel ltx_align_center\" id=\"S4.T5.3\"><span class=\"ltx_text\" id=\"S4.T5.3.1\" style=\"font-size:70%;\">(a) playroom</span></p>\n</div>\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<table class=\"ltx_tabular ltx_centering ltx_figure_panel ltx_guessed_headers ltx_align_middle\" id=\"S4.T5.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T5.4.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T5.4.1.1.1\" style=\"padding:0.35pt 13.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T5.4.1.1.2\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.1.1.2.1\" style=\"font-size:70%;\">noise error</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T5.4.1.1.3\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.1.1.3.1\" style=\"font-size:70%;\">refine error</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T5.4.1.1.4\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.1.1.4.1\" style=\"font-size:70%;\">tvec\u2191</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T5.4.1.1.5\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.1.1.5.1\" style=\"font-size:70%;\">qvec\u2191</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T5.4.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.4.2.1.1\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.2.1.1.1\" style=\"font-size:70%;\">q2, t1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.4.2.1.2\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.2.1.2.1\" style=\"font-size:70%;\">0.81/7.79</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.4.2.1.3\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.2.1.3.1\" style=\"font-size:70%;\">0.33/2.83</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.4.2.1.4\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.2.1.4.1\" style=\"font-size:70%;\">59.3%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.4.2.1.5\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.2.1.5.1\" style=\"font-size:70%;\">63.7%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.4.3.2.1\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.3.2.1.1\" style=\"font-size:70%;\">q2, t2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.4.3.2.2\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.3.2.2.1\" 
style=\"font-size:70%;\">0.31/8.42</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.4.3.2.3\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.3.2.3.1\" style=\"font-size:70%;\">0.16/1.81</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.4.3.2.4\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.3.2.4.1\" style=\"font-size:70%;\">48.4%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.3.2.5\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.3.2.5.1\" style=\"font-size:70%;\">78.5%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.4.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T5.4.4.3.1\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.4.3.1.1\" style=\"font-size:70%;\">q3, t3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T5.4.4.3.2\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.4.3.2.1\" style=\"font-size:70%;\">0.03/0.81</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T5.4.4.3.3\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.4.3.3.1\" style=\"font-size:70%;\">0.02/0.26</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T5.4.4.3.4\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.4.3.4.1\" style=\"font-size:70%;\">33.3%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.4.4.3.5\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.4.4.3.5.1\" style=\"font-size:70%;\">67.9%</span></td>\n</tr>\n</tbody>\n</table>\n</div>\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<p class=\"ltx_p ltx_figure_panel ltx_align_center\" id=\"S4.T5.5\"><span class=\"ltx_text\" id=\"S4.T5.5.1\" style=\"font-size:70%;\">(b) drjohnson</span></p>\n</div>\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<table class=\"ltx_tabular ltx_centering ltx_figure_panel ltx_guessed_headers ltx_align_middle\" id=\"S4.T5.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T5.6.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T5.6.1.1.1\" style=\"padding:0.35pt 13.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T5.6.1.1.2\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.1.1.2.1\" style=\"font-size:70%;\">noise error</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T5.6.1.1.3\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.1.1.3.1\" style=\"font-size:70%;\">refine error</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T5.6.1.1.4\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.1.1.4.1\" style=\"font-size:70%;\">tvec\u2191</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T5.6.1.1.5\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.1.1.5.1\" style=\"font-size:70%;\">qvec\u2191</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T5.6.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.6.2.1.1\" style=\"padding:0.35pt 
13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.2.1.1.1\" style=\"font-size:70%;\">q2, t1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.6.2.1.2\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.2.1.2.1\" style=\"font-size:70%;\">0.68/7.81</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.6.2.1.3\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.2.1.3.1\" style=\"font-size:70%;\">0.15/1.87</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.6.2.1.4\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.2.1.4.1\" style=\"font-size:70%;\">77.9%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.6.2.1.5\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.2.1.5.1\" style=\"font-size:70%;\">76.1%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.6.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.6.3.2.1\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.3.2.1.1\" style=\"font-size:70%;\">q2, t2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.6.3.2.2\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.3.2.2.1\" style=\"font-size:70%;\">0.33/7.86</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.6.3.2.3\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.3.2.3.1\" style=\"font-size:70%;\">0.13/2.21</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.6.3.2.4\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.3.2.4.1\" style=\"font-size:70%;\">60.6%</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.6.3.2.5\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.3.2.5.1\" style=\"font-size:70%;\">71.9%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.6.4.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T5.6.4.3.1\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.4.3.1.1\" style=\"font-size:70%;\">q3, t3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T5.6.4.3.2\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.4.3.2.1\" style=\"font-size:70%;\">0.03/0.72</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T5.6.4.3.3\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.4.3.3.1\" style=\"font-size:70%;\">0.01/0.21</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T5.6.4.3.4\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.4.3.4.1\" style=\"font-size:70%;\">66.7%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T5.6.4.3.5\" style=\"padding:0.35pt 13.0pt;\"><span class=\"ltx_text\" id=\"S4.T5.6.4.3.5.1\" style=\"font-size:70%;\">70.8%</span></td>\n</tr>\n</tbody>\n</table>\n</div>\n</div>\n</figure>",
|
| 98 |
+
"capture": "TABLE V: We show the median translation and rotation error (m/\u00b0) for the poses with noise and for the poses after optimization. (q2, t1) denotes the introduction of noise at the percentile of qvec, decile of tvec, and the rest is the same."
|
| 99 |
+
},
|
| 100 |
+
"6": {
|
| 101 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T6\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:70%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE VI: </span> We show the median translation and rotation error (m/\u00b0) for heuristic optimization and joint optimization strategies.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T6.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T6.3.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T6.3.1.1.1\" style=\"padding:0.35pt 15.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T6.3.1.1.2\" style=\"padding:0.35pt 15.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.3.1.1.2.1\" style=\"font-size:70%;\">init error</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T6.3.1.1.3\" style=\"padding:0.35pt 15.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.3.1.1.3.1\" style=\"font-size:70%;\">joint error</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T6.3.1.1.4\" style=\"padding:0.35pt 15.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.3.1.1.4.1\" style=\"font-size:70%;\">heuristic error</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T6.3.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T6.3.2.1.1\" style=\"padding:0.35pt 15.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.3.2.1.1.1\" style=\"font-size:70%;\">playrroom</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T6.3.2.1.2\" style=\"padding:0.35pt 15.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.3.2.1.2.1\" style=\"font-size:70%;\">0.03/0.81</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T6.3.2.1.3\" style=\"padding:0.35pt 15.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.3.2.1.3.1\" style=\"font-size:70%;\">0.02/0.42</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T6.3.2.1.4\" style=\"padding:0.35pt 15.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.3.2.1.4.1\" style=\"font-size:70%;\">0.02/0.26</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T6.3.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T6.3.3.2.1\" style=\"padding:0.35pt 15.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.3.3.2.1.1\" style=\"font-size:70%;\">drjohnson</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T6.3.3.2.2\" style=\"padding:0.35pt 15.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.3.3.2.2.1\" style=\"font-size:70%;\">0.03/0.72</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T6.3.3.2.3\" style=\"padding:0.35pt 15.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.3.3.2.3.1\" style=\"font-size:70%;\">0.02/0.47</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T6.3.3.2.4\" style=\"padding:0.35pt 15.0pt;\"><span class=\"ltx_text\" id=\"S4.T6.3.3.2.4.1\" style=\"font-size:70%;\">0.01/0.21</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 102 |
+
"capture": "TABLE VI: We show the median translation and rotation error (m/\u00b0) for heuristic optimization and joint optimization strategies."
|
| 103 |
+
},
|
| 104 |
+
"7": {
|
| 105 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T7\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE VII: </span>We show the median translation and rotation error (m/\u00b0) for poses with noise and for poses after optimization using different heuristic functions.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T7.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T7.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T7.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T7.1.1.1.2\">noise error</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T7.1.1.1.3\">H(Sum of Diff)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T7.1.1.1.4\">H(PSNR)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T7.1.1.1.5\">H(SSIM)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T7.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T7.1.2.1.1\">playrroom</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T7.1.2.1.2\">0.81/7.79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T7.1.2.1.3\">0.33/2.83</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T7.1.2.1.4\">0.76/6.29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T7.1.2.1.5\">0.87/6.83</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T7.1.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T7.1.3.2.1\">drjohnson</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T7.1.3.2.2\">0.68/7.81</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T7.1.3.2.3\">0.15/1.87</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T7.1.3.2.4\">0.60/6.61</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T7.1.3.2.5\">0.65/7.59</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 106 |
+
"capture": "TABLE VII: We show the median translation and rotation error (m/\u00b0) for poses with noise and for poses after optimization using different heuristic functions."
|
| 107 |
+
}
|
| 108 |
+
},
|
| 109 |
+
"image_paths": {
|
| 110 |
+
"1": {
|
| 111 |
+
"figure_path": "2409.10925v2_figure_1.png",
|
| 112 |
+
"caption": "Figure 1: HGSLoc significantly reduces the error between the coarse pose and the GT, and exhibits strong noise resistance.",
|
| 113 |
+
"url": "http://arxiv.org/html/2409.10925v2/extracted/5869584/graph1.png"
|
| 114 |
+
},
|
| 115 |
+
"2": {
|
| 116 |
+
"figure_path": "2409.10925v2_figure_2.png",
|
| 117 |
+
"caption": "Figure 2: Overview of HGSLoc. Coarse pose estimates are generated by a pre-trained pose estimator, while high-quality reconstructed scenes are obtained through Gaussian densification. The rendered image of the coarse pose in the scene differs significantly from the query image. After applying the heuristic optimization algorithm, the rendered image aligns much more closely with the query image, resulting in a more accurate pose estimate.",
|
| 118 |
+
"url": "http://arxiv.org/html/2409.10925v2/extracted/5869584/graph2.png"
|
| 119 |
+
},
|
| 120 |
+
"3": {
|
| 121 |
+
"figure_path": "2409.10925v2_figure_3.png",
|
| 122 |
+
"caption": "Figure 3: HGSLoc demonstrates a significant optimization effect on the coarse poses obtained using the ACE and Marepo methods. Each subimage is divided by a diagonal line: the rendered image from the pose is shown in the bottom left part, while the GT image is shown in the top right part. The rendered images corresponding to the ACE and Marepo methods exhibit substantial misalignment with the GT images. To facilitate a clearer comparison, we provide a zoomed-in view of the image, highlighted within the red box.",
|
| 123 |
+
"url": "http://arxiv.org/html/2409.10925v2/extracted/5869584/graph3.png"
|
| 124 |
+
},
|
| 125 |
+
"4": {
|
| 126 |
+
"figure_path": "2409.10925v2_figure_4.png",
|
| 127 |
+
"caption": "Figure 4: Each subimage is divided by a diagonal line, with the image rendered by the estimated pose on the lower left and the GT image on the upper right. The diagonal lines in the optimized comparison image appear less distinct, reflecting improved alignment with the GT image. HGSLoc demonstrates its effectiveness in refining pose estimation, achieving precise values while mitigating the impact of band noise.",
|
| 128 |
+
"url": "http://arxiv.org/html/2409.10925v2/extracted/5869584/graph5.png"
|
| 129 |
+
}
|
| 130 |
+
},
|
| 131 |
+
"validation": true,
|
| 132 |
+
"references": [],
|
| 133 |
+
"url": "http://arxiv.org/html/2409.10925v2"
|
| 134 |
+
}
|
20240921/2409.13952v1.json
ADDED
|
The diff for this file is too large to render.
See raw diff
20240921/2409.13953v1.json
ADDED
|
@@ -0,0 +1,501 @@
| 1 |
+
{
|
| 2 |
+
"title": "Training Large ASR Encoders with Differential Privacy",
|
| 3 |
+
"abstract": "Self-supervised learning (SSL) methods for large speech models have proven to be highly effective at ASR. With the interest in public deployment of large pre-trained models, there is a rising concern for unintended memorization and leakage of sensitive data points from the training data. In this paper, we apply differentially private (DP) pre-training to a SOTA Conformer-based encoder, and study its performance on a downstream ASR task assuming the fine-tuning data is public. This paper is the first to apply DP to SSL for ASR, investigating the DP noise tolerance of the BEST-RQ pre-training method. Notably, we introduce a novel variant of model pruning called gradient-based layer freezing that provides strong improvements in privacy-utility-compute trade-offs. Our approach yields a LibriSpeech test-clean/other WER (%) of 3.78/ 8.41 with (, )-DP for extrapolation towards low dataset scales, and 2.81/ 5.89 with (, )-DP for extrapolation towards high scales.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Across various sub-fields of Machine Learning (ML), large scale transformer-based models [1 ###reference_b1###, 2 ###reference_b2###] have seen widespread adoption for modeling long-range dependencies in sequences. In automatic speech recognition (ASR), the known success of convolutions [3 ###reference_b3###] prompted the introduction of the Conformer architecture [4 ###reference_b4###], and later incorporation of BERT-style self-supervised learning (SSL) via the BEST-RQ pre-training method [5 ###reference_b5###]. Popular ASR models are often released as modifiable checkpoints after being pre-trained on thousands of hours of crawled user-spoken utterances. Following this paradigm of pre-training ASR encoders on massive amount of data can put the model at risk for leaking sensitive information, especially when the data consist of web crawls that can contain sensitive information such as gender, dialect or identity of a speaker.\nIt is well-known that ML models can leak sensitive information about their training dataset, even when the data is kept private. This has been extensively discussed by works such as [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###] in natural language processing (NLP) and computer vision (CV) and later extended to the speech domain by [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###]. This paper explores methods focused on differentially private (DP) pre-training of ASR encoders (illustration in Figure 1 ###reference_###) for mitigating the privacy leakage from trained encoders.\n###figure_1### Differential Privacy [16 ###reference_b16###] provides a robust way to combat the privacy leakage issue. It provides theoretical guarantees about the limits of influence of any individual training point towards the final model, preventing attackers from confidently inferring whether any particular data sample was used for training. For training large models, DP training is challenging due to the the stringent trade-off between privacy, utility and compute (shortened as trade-offs in this paper).\nA standard mechanism of ensuring DP during model training is the addition of noise to the gradients, which can increase privacy but negatively affect model performance. Increasing batch size [17 ###reference_b17###, 18 ###reference_b18###] can mitigate this trade-off but increases compute costs. Recent works in language modeling and vision have demonstrated the utility of DP methods being close to their non-private baselines [19 ###reference_b19###, 20 ###reference_b20###]. Most works focus on improving trade-offs for fine-tuning, with positive effects seen for parameter efficient techniques such as LoRA [21 ###reference_b21###]. LoRA mitigates the issue of growing DP noise magnitude as the model size is increased [22 ###reference_b22###, 19 ###reference_b19###]. Existing privacy literature lacks comprehensive evaluation of methods for reducing the trade-off between privacy and accuracy in pre-training language models, despite extensive improvement of trade-offs observed for fine-tuning.\nThis paper focuses on DP during pre-training (Figure 1 ###reference_###), where new challenges arise from adding substantial DP noise for full model training. Recent work in language modeling [23 ###reference_b23###] has successfully narrowed the gap between private DP pre-training and the non-private baseline by employing private tokenization and increased compute. 
More recently, [18 ###reference_b18###] explored improving trade-offs in the context of DP Federated Learning (FL) for ASR by utilizing per-layer clipping [24 ###reference_b24###].\nTo date, no research has evaluated training with DP in the SSL setting for ASR. This paper makes the following contributions: 1) We are the first to assess the DP noise tolerance for the BEST-RQ setting of a large Conformer model, and 2) We introduce a novel variant of model pruning called gradient-based layer freezing, where we determine the model layers to freeze based on an analysis of squared gradient norms. Collectively, our proposed approach achieves significant improvements in utility (Word Error Rate, i.e., WER in our case), while maintaining strong privacy guarantees of ."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Background and Related Work",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Differential Privacy",
|
| 21 |
+
"text": "Differential Privacy (DP) [16 ###reference_b16###] is widely considered a gold standard for bounding and quantifying the privacy leakage of sensitive data when performing learning tasks. Intuitively, DP prevents an adversary from confidently making any conclusions about whether any particular data was used in training a model, even while having access to the model and arbitrary external side information. The formal definition of DP depends on the notion of neighboring datasets: we will refer to a pair of datasets as neighbors if can be obtained from by adding or removing one data sample.\nA (randomized) algorithm is (, )-differentially private if for all pairs of neighboring datasets , and for any we have,\nTypical recommendations for and are to be as small as possible, as is the multiplicative factor between the probabilities of the two neighboring datasets and is the additive scalar which controls the strength of the relaxation from the stricter -DP definition [16 ###reference_b16###]. The general recommendation in the literature is to choose where is the number of records in the dataset [25 ###reference_b25###]. [26 ###reference_b26###] recommend different tiers for values going from strong formal guarantees to reasonable and weak guarantees, where Tier1 , Tier2 and Tier 3 ."
|
| 22 |
+
},
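For reference, the (ε, δ)-DP condition that this definition states in words takes the following standard textbook form, with A the randomized algorithm, D and D′ a pair of neighboring datasets, and S any set of outcomes:

```latex
\Pr\bigl[\mathcal{A}(D) \in S\bigr] \;\le\; e^{\varepsilon}\,\Pr\bigl[\mathcal{A}(D') \in S\bigr] + \delta
```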
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Training with Differential Privacy",
|
| 27 |
+
"text": "DP can be integrated at various stages of the model lifecycle [26 ###reference_b26###], and this paper focuses on applying DP during the model pre-training stage. In this set-up, the pre-training data is kept private, the model is trained using a noise additive technique such as DP-SGD [27 ###reference_b27###] and the model can be released publicly along with its parameter weights for public fine-tuning. Due to the post-processing property of DP, any modifications to the released model (such as public fine-tuning) hold the same theoretical guarantees over the pre-training data.\nTypically, differentially private training is performed using variants of DP-SGD [27 ###reference_b27###], where the main distinctions from non-private training are the clipping of per-example gradients, and the addition of spherical Gaussian noise, as illustrated (for ASR pre-training) by Figure 1 ###reference_###. Note that the magnitude of Gaussian noise (called noise multiplier) is directly correlated with the value of , calculated using the chosen privacy accounting technique such as the one by [27 ###reference_b27###]. This is implemented as a modification to the gradient computation during the optimization step by computing per-example gradients [28 ###reference_b28###], clipping to limit their per-sample sensitivity, and the addition of calibrated Gaussian noise. Therefore, DP training is relatively independent of the exact choice of optimizer. For our experiments, we rely on the Adam optimizer with DP modifications for example-level DP. Training with DP incurs several challenges as a result of clipping and addition of noise, commonly characterized as privacy-utility-compute trade-offs (truncated as trade-offs in this paper)."
|
| 28 |
+
},
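To make the per-example clipping and noise addition described above concrete, the following is a minimal NumPy sketch of one differentially private gradient update; function and argument names are illustrative, and this is a sketch of the general DP-SGD/DP-Adam mechanism rather than the authors' training code.

```python
import numpy as np

def dp_noisy_gradient(per_example_grads, clip_norm, noise_multiplier, rng=None):
    """One DP step: clip each example's gradient to an L2 bound, sum,
    and add spherical Gaussian noise calibrated to that bound."""
    rng = np.random.default_rng() if rng is None else rng
    grads = np.asarray(per_example_grads, dtype=float)  # shape [batch, num_params]

    # Per-example clipping: rescale any gradient whose L2 norm exceeds clip_norm.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip_norm / (norms + 1e-12))

    # Sum and add noise with std = noise_multiplier * clip_norm (the sensitivity).
    noisy_sum = grads.sum(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=grads.shape[1])
    return noisy_sum / grads.shape[0]
```

The resulting noisy mean gradient can then be handed to any first-order optimizer (the paper uses an Adam variant), since the DP mechanism only touches the gradient computation itself.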
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "Related Work",
|
| 33 |
+
"text": "Many works [22 ###reference_b22###, 17 ###reference_b17###, 29 ###reference_b29###] have shown that the trade-offs are substantial for training large neural networks with state-of-the-art techniques like DP-SGD [22 ###reference_b22###, 27 ###reference_b27###]. Consequently, there has been work [27 ###reference_b27###, 30 ###reference_b30###, 31 ###reference_b31###, 32 ###reference_b32###] on pre-training using public data for improving the utility of DP-SGD.\nA recent work [18 ###reference_b18###] has considered DP training for ASR models, but focusing on the Federated Learning (FL) regime. Additionally, many works [33 ###reference_b33###, 19 ###reference_b19###, 20 ###reference_b20###] have focused on privately fine-tuning neural networks (focusing largely on vision and language models (LMs)) after pre-training using public data to improve the trade-offs for DP-SGD. While it is common in literature to treat pre-training data as public, modern large model pre-training can involve sensitive data that is susceptible to be memorized and potentially leaked. There is only one recent work [23 ###reference_b23###] that studies DP pre-training for LMs, and demonstrates that such models can be fine-tuned to high accuracies on downstream tasks. Related to modifications on bounding sensitivity within a training step, [34 ###reference_b34###] have considered the role of gradient clipping and suggest model pruning as a strategy to improve the trade-offs."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.4",
|
| 37 |
+
"parent_section_id": "2",
|
| 38 |
+
"section_name": "Challenges",
|
| 39 |
+
"text": "DP during pre-training remains a relatively unexplored area, and it is unclear whether commonly used fine-tuning techniques directly applied in this context. We devise novel techniques inspired by prior works, and demonstrate their effectiveness during pre-training. The computationally intensive training process, requiring updates to most model parameters, limits quick exploration and prototyping. To address this, we expanded our experimental exploration by evaluating the model continuously during a stable and early pre-trained checkpoint, confirming that comparisons remain valid during later stages of pre-training. This approach enables us to rigorously evaluate early research ideas and maintain a rapid prototyping pace for optimizing the privacy-utility-compute trade-offs during pre-training, contributing to the advancement of privacy-preserving SSL models for ASR."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "Experimental Setup",
|
| 45 |
+
"text": "For our model, we choose the 300M variant [12 ###reference_b12###] of the state-of-the art ASR model architecture, Conformer XL [35 ###reference_b35###]. The encoder is pre-trained on LibriLight (LL) [36 ###reference_b36###] for 1M steps using self-supervised learning via the BERT-based Speech pre-Training\nwith Random-projection Quantizer (BEST-RQ) [5 ###reference_b5###]. Fine-tuning is done for 60k steps post attaching an additional projection layer on the encoder, using the LibriSpeech (LS) [37 ###reference_b37###] dataset.\nHyperparameter details and model architecture follow the BEST-RQ paper [5 ###reference_b5###], and official dataset splits were used for training, validation and hyperparameter tuning. Pre-training takes ~1 week on Dragonfish TPUs with 8x8 topology, fine-tuning takes 1 day and original batch size was set at 512.\nPractically, DP training involves adding spherical Gaussian noise calculated using popular privacy accounting techniques like [27 ###reference_b27###]. Most related works target Tier 1 or Tier 2 privacy guarantees with 10 [26 ###reference_b26###]. Privacy accounting techniques consider various factors such as target and , dataset size, minibatch size and training epochs to determine the Gaussian noise multiplier added to the gradients during training. Throughout this paper, we will closely correlate the noise multiplier with our target of 10 to demonstrate strong privacy guarantees.\nIn this paper, we apply DP to the pre-training stage of our model (with LL), and assume that the fine-tuning dataset (LS) for the downstream ASR task is public.\nUtility is reported as test-clean/other WER on the LS dataset.\nWe use the updated moments accountant [27 ###reference_b27###, 38 ###reference_b38###] for calculating our privacy guarantees.\nWe report experiments with different DP noise multipliers in the range , since we find that noise multipliers beyond lead to divergence (more details in Section 4.1 ###reference_###).\nSince the trade-offs with large model training can be substantial, we follow the extrapolation strategy similar to recent works [17 ###reference_b17###, 18 ###reference_b18###]. We extrapolate the -DP assuming the training dynamic remains unchanged upon linearly scaling minibatch size and noise multiplier (to maintain the expected signal-to-noise ratio for the gradient update) along with scaling the dataset size (for improved privacy accounting).\nTo evaluate the impact of DP noise multipliers, we experiment with various values and map the corresponding using the moments accountant [27 ###reference_b27###, 38 ###reference_b38###]. 
We scale up the batch size, noise multiplier, and pre-training dataset size by a constant factor, as illustrated in Figure 2 ###reference_###.\nBased on this scaling strategy, we hypothesize that training with a larger, well-curated dataset of the same distribution would yield similar Word Error Rate (WER) performance while improving privacy accounting.\nThis would allow for a smaller noise multiplier and a stronger guarantee.\nFigure 2 ###reference_### illustrates the positive effects of different scale-up factors on , leading to significant improvements in privacy guarantees.\nTable 1 ###reference_### presents the specific scale-up factors for noise multipliers considered in this paper to achieve a DP of at , where is the scaled-up dataset size.\nAccording to recent work [26 ###reference_b26###], such a level of DP can be classified in the \u201cTier 2: Reasonable privacy guarantees\u201d.\n###figure_2###"
|
| 46 |
+
},
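As a rough sketch of the extrapolation strategy above, the three quantities are scaled by the same factor so that the expected signal-to-noise ratio of each update is preserved while the larger dataset improves the accounting. The base batch size of 512 and the noise multiplier 1e-3 appear elsewhere in the paper; the dataset size, the scale-up factor, and the accountant call below are placeholders for illustration, not the paper's values or a real API.

```python
def extrapolate_setting(base_batch, base_noise_multiplier, base_dataset_size, scale_up):
    """Linearly scale batch size, noise multiplier and dataset size together.

    Scaling batch size and noise multiplier by the same factor keeps the
    expected signal-to-noise ratio of the gradient update unchanged, while the
    proportionally larger dataset lowers the sampling rate and hence the
    accounted epsilon at a fixed delta.
    """
    return {
        "batch_size": base_batch * scale_up,
        "noise_multiplier": base_noise_multiplier * scale_up,
        "dataset_size": base_dataset_size * scale_up,
    }

# Illustrative call only (dataset size and scale-up factor are made up):
setting = extrapolate_setting(512, 1e-3, base_dataset_size=10_000_000, scale_up=50)
# Epsilon is then recomputed with a moments/RDP accountant, schematically:
# eps = accountant(setting["noise_multiplier"],
#                  sample_rate=setting["batch_size"] / setting["dataset_size"],
#                  steps=1_000_000, delta=1.0 / setting["dataset_size"])  # placeholder
```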
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "Results",
|
| 51 |
+
"text": "In Section 4.1 ###reference_###, we introduce the baseline of (non-privately) pre-training the BEST-RQ 300M model. We detail preliminary modeling changes required to comply with DP training and include an analysis of the amount of DP noise tolerable for minimal performance regression of the model. Next, in Section 4.2 ###reference_### following [18 ###reference_b18###], we incorporate per layer clipping for improved utility and noise tolerance. Lastly, we introduce our gradient-based layer freezing strategy (dubbed as LayerFreeze). Our results denote a synergy between per-layer clipping and our model pruning technique, based on the compounding improvements we observe in model quality (summary of results in Table 3 ###reference_###)."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.1",
|
| 55 |
+
"parent_section_id": "4",
|
| 56 |
+
"section_name": "Noise tolerance of the BEST-RQ 300M model",
|
| 57 |
+
"text": "We establish non-private baselines for the BEST-RQ 300M model, and analyze the degree of noise tolerated for minimal utility regression. As is typical for DP training, we replace batch normalization with group normalization to effectively limit per-sample contributions and avoiding mixing of batch statistics across samples [26 ###reference_b26###]. After experimentation, we find the best setting of group normalization to have input rank of 3, number of groups as 1 and group norm epsilon as , resulting in a test-clean/other WER (%) of 2.17/4.23 post fine-tuning on LibriSpeech.\nThen, we experiment with choices for per-example clipping bounds, and find the bound 1.5 to be clipping almost all samples during training while providing minimal loss in performance, resulting in a WER of 2.21/4.29. We refer to this as the non-private lower bound result. Thus, the non-private baseline we report for BEST-RQ consists of group normalization and per-example clipping, to offer a direct comparison to the level of additive noise in our experiments. Our results for the non-private baseline, and for differing level of DP noise are reported in Table 2 ###reference_###.\nNote that the performance of the model with fine-tuning from random initialization (no pre-training) is a WER of 4.43/ 11.23, which is the upper bound for effectively measuring the positive effects of pre-training. We refer to this result as the no pre-train upper bound, which is effectively the same as not applying BEST-RQ style pre-training to ConformerXL and just doing supervised training on the Librispeech dataset.\nAs can be seen from Table 2 ###reference_###, we start seeing significant regressions (greater than 10% relative) for noise multiplier , where the standard extrapolation technique achieves DP only at a practically prohibitive scale-up factor of 1070 (Table 1 ###reference_###).\nFor reference, extrapolation factor for DP from noise multiplier is as low as 52, though with the current approach we get WER of 15.38/29.62 which is higher than the no pre-train upper bound.\nOur focus in the rest of the paper is to improve trade-offs for the settings with larger noise multipliers in the range ."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.2",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "Improving the noise tolerance via warm-starting, per-layer clipping and gradient-based layer freezing",
|
| 63 |
+
"text": "Recent studies have made significant strides in optimizing DP training, and we incorporated these findings into our experimental design. Ganesh et al. [39 ###reference_b39###] show the importance of public pre-training for private model training, especially with an in-domain public checkpoint. Pelikan et al. [18 ###reference_b18###] revive per-layer clipping and show improvements for DP in the supervised training setting of FL for ASR. A couple of recent works [40 ###reference_b40###, 34 ###reference_b34###] have shed light on the benefits of model pruning for DP training, by minimizing the negative effects of compounding noise affected by the model dimensionality.\nThus, in order to bridge the utility gap with the non-private pre-trained baseline, we consider the following three improvements: warm-starting (WS) using public data, per-layer clipping, and our novel method of gradient-based layer freezing.\nTable 3 ###reference_### summarizes the compounding improvements on the three considered techniques."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2.1",
|
| 67 |
+
"parent_section_id": "4.2",
|
| 68 |
+
"section_name": "4.2.1 Warm-starting using in-domain public data (Public WS)",
|
| 69 |
+
"text": "Following prior work on using in-domain public data for warmstarting DP training [41 ###reference_b41###, 39 ###reference_b39###], we randomly selected 1% of the LibriLight (LL) train dataset as a surrogate for a small amount of available in-domain public data.\nFurther, for improved trade-offs, we conduct the DP pre-training on the entire LL train dataset (i.e., samples in the 1% public partition are incorporated into the private training dataset, providing a marginal improvement in the privacy accounting).\nFine-tuning with LibriSpeech (LS) after only (non-private) pre-training with 1% LL yields a WER of 3.88/8.94. Note that this is better than our no pre-train upper bound of 4.43/11.23, but still substantially worse than the non-private lower bound of 2.21/4.29, validating the assumption about only a small amount on in-distribution public data being available in practical scenarios.\nWe present the results with public warmstart in the second column in Table 3 ###reference_###, and compared to the random initialization results in Table 2 ###reference_###, we observe a slight regression for smaller noise multipliers , whereas a significant improvement for the higher noise multipliers ."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.2.2",
|
| 73 |
+
"parent_section_id": "4.2",
|
| 74 |
+
"section_name": "4.2.2 Per-Layer Clipping",
|
| 75 |
+
"text": "There are two commonly-used variants of per-layer clipping [24 ###reference_b24###, 18 ###reference_b18###], denoted by the uniform variant (which splits the clipping bound equally amongst all layers), and the dim variant (which splits the clipping bound proportional to each layer\u2019s dimension).\nWe conducted experiments using both the variants, and but found the dim variant to be outperforming the uniform one (similar to results seen in [18 ###reference_b18###]).\nWe present the results for adding per-layer clipping for DP pre-training, post public warmstarting, in the third column in Table 3 ###reference_###. While we observe the model diverging for the highest noise multiplier of , we notice significant improvements in model quality for all other considered values of noise multiplier, corroborating the observation in [18 ###reference_b18###] regarding the usefulness of per-layer clipping in the ASR domain."
|
| 76 |
+
},
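A minimal sketch of how the two splitting rules described above could divide an overall clipping budget across layers follows; the exact normalization used by the paper and by [18] is not spelled out here, so the sum-of-squares composition below is an assumption of this sketch.

```python
import numpy as np

def per_layer_clip_bounds(layer_dims, total_clip, variant="dim"):
    """Split an overall L2 clipping budget across layers.

    'uniform' gives every layer an equal share; 'dim' weights each layer by its
    parameter dimension. Shares are combined so that the squared per-layer
    bounds sum to total_clip**2, i.e. the overall L2 sensitivity stays
    total_clip. The exact normalization in the paper and in [18] may differ.
    """
    dims = np.asarray(layer_dims, dtype=float)
    if variant == "uniform":
        weights = np.ones_like(dims)
    elif variant == "dim":
        weights = dims
    else:
        raise ValueError(f"unknown variant: {variant}")
    return total_clip * np.sqrt(weights / weights.sum())

# Purely illustrative layer dimensions and budget:
# per_layer_clip_bounds([1024, 262144, 4096], total_clip=1.5, variant="dim")
```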
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.2.3",
|
| 79 |
+
"parent_section_id": "4.2",
|
| 80 |
+
"section_name": "4.2.3 Gradient-based layer freezing (LayerFreeze)",
|
| 81 |
+
"text": "For reducing the dimensionality of DP training, some recent works [40 ###reference_b40###, 34 ###reference_b34###] propose starting from a pruned model that is initialized from a publicly pre-trained checkpoint.\nIn this work, we devise a novel one-shot variant of model pruning called Gradient-based Layer Freezing (Algorithm 1 ###reference_###), where instead of removing or freezing individual parameters based on their magnitudes, we freeze them layer-wise based on the normalized squared norm of their gradients observed throughout the public warmstarting phase.\nAfter this operation, we continue DP pre-training with the pruned model and the entire LL dataset.\n###figure_3### Once the norms of the per-layer gradients until our public warmstarting checkpoint are accumulated, we focus on % of the model parameters, consisting of layers with the highest normalized accumulated squared gradient norm. We perform tuning experiments by freezing layers associated with either these parameter, or the remaining parameters.\n is treated as a hyperparameter, explored in the range as seen in Figure 3 ###reference_###.\nWe consistently find that DP pre-training benefits from freezing layers with the top parameters, where the best case is when .\nTo shed additional light on the layers frozen in our best-case scenario, layers corresponding to the bias, scale, beta, and gamma terms were frozen.\nWe report the results of using LayerFreeze, along with per-layer clipping and public warmstarting, in the fourth column in Table 3 ###reference_###.\nIt is important to note that LayerFreeze provides significant improvements in model quality in all the considered settings.\nIn summary, we obtain LibriSpeech WERs of 3.78/8.41 with (, )-DP for LibriLight with an extrapolation factor of 52 (low dataset scaling regime), and 2.81/5.89 with (, )-DP for LibriLight with an extrapolation factor of 530 (high dataset scaling regime)."
|
| 82 |
+
},
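The selection rule described above can be sketched as follows: accumulate normalized squared gradient norms per layer over the public warm-start steps, then freeze the layers that hold the top p% of parameters. This is an illustration of that description, not a reproduction of the authors' Algorithm 1; `top_fraction` plays the role of the tuned hyperparameter p, whose best value is not fixed here.

```python
from collections import defaultdict
import numpy as np

def normalized_sq_grad_norms(grad_history):
    """Accumulate squared gradient norms per layer over the warm-start steps
    and normalize each total by the layer's parameter count.

    grad_history: iterable of {layer_name: gradient ndarray} dicts, one per step.
    """
    totals, sizes = defaultdict(float), {}
    for step_grads in grad_history:
        for name, g in step_grads.items():
            totals[name] += float(np.sum(np.square(g)))
            sizes[name] = g.size
    return {name: totals[name] / sizes[name] for name in totals}, sizes

def layers_to_freeze(scores, sizes, top_fraction):
    """Greedily pick layers with the highest normalized scores until roughly
    top_fraction of all parameters is covered; these layers stay frozen for
    the remainder of DP pre-training."""
    budget = top_fraction * sum(sizes.values())
    frozen, covered = [], 0
    for name in sorted(scores, key=scores.get, reverse=True):
        if covered >= budget:
            break
        frozen.append(name)
        covered += sizes[name]
    return frozen
```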
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "Conclusion",
|
| 87 |
+
"text": "We introduce DP to SSL for ASR, and a novel variant of model pruning called gradient-based layer freezing.\nOur technique improves the trade-offs for DP ASR pre-training, over improvements from public warmstarting and per-layer clipping.\nOverall, we demonstrate a DP training method that improves utility significantly while maintaining robust privacy guarantees under various extrapolation factors.\nThough our work provides a way to pre-train ASR encoders with strong DP guarantees, the extrapolations required to reach those guarantees can be limiting in some practical regimes.\nImproving computation trade-offs that we incur for reaching strong DP guarantees is an interesting direction we leave for future investigation."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "6",
|
| 91 |
+
"parent_section_id": null,
|
| 92 |
+
"section_name": "Acknowledgments",
|
| 93 |
+
"text": "We would like to thank the following collaborators for supporting this work, offering valuable feedback & helping with quick prototyping of experiments: Lun Wang, Rajiv Mathews, Nanxin Chen, Brennan Saeta, Josh Lipschultz, Qiao Zhang, Colin Gaffney, Virat Shejwalkar and Hongbin Liu."
|
| 94 |
+
}
|
| 95 |
+
],
|
| 96 |
+
"appendix": [
|
| 97 |
+
{
|
| 98 |
+
"section_id": "Appendix 1",
|
| 99 |
+
"parent_section_id": null,
|
| 100 |
+
"section_name": "Appendix A Additional Extrapolation Analysis",
|
| 101 |
+
"text": "###figure_4### ###figure_5### For our extrapolation analysis for noise multiplier, batch size and our batch multiplier analysis, we consider 3 settings: 1) the original setting where only the batch multiplier is scaled up, 2) dataset scale up setting where batch multiplier is scaled up by the same factor as the dataset scale up, and 3) dataset scale up with dataset being 10x higher than the batch scale up. In all settings, we assume batch multipliers beyond 1000 to be too intractable to report.\nWe show the effects of settings 1 and 3 in the Figure 4 ###reference_###, whereas we see the effects of setting 2 in Figure 2 ###reference_###. We can see that the original setting only allows for the noise multiplier 0.1 where the model diverges in utility. However, the more extreme settings in Figures 2 ###reference_### and 4(b) ###reference_sf2### allow even noise multiplier of reach for batch multipliers 400. The ideal setting is with noise that allows for batch scale up (and corresponding dataset scale ups) for around 50x."
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"section_id": "Appendix 2",
|
| 105 |
+
"parent_section_id": null,
|
| 106 |
+
"section_name": "Appendix B Global and per-layer clip experiments with no noise",
|
| 107 |
+
"text": "Prior to adding in noise, we experimented with different clipping values, while noting the fraction of clipped gradients. For both global and per-layer clip values, we ensured that the clip values caused minimal loss of utility while clipping the maximum fraction of gradients. It is expected that higher clip values would arrive closer to the no-clip set up, but would not allow us to bind sensitivity for DP training due to the fewer fraction of gradients clipped. Therefore, we only selected both global and per-layer clip values below 5, which would clip most gradients and lead to minimal loss of utility. Based on the results in tables 4 ###reference_### and 5 ###reference_###, we selected clip value 1.5 for the global setting and 0.1 for the per layer clip setting with the dim variant.\n###table_1### ###table_2###"
|
| 108 |
+
},
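A trivial sketch of how the fraction of clipped gradients referred to above can be tracked while sweeping candidate clip values (illustrative helper, not the experiment code):

```python
import numpy as np

def fraction_clipped(per_example_grad_norms, clip_value):
    """Share of examples in a batch whose gradient L2 norm exceeds the clip
    value and is therefore actually rescaled by per-example clipping."""
    return float(np.mean(np.asarray(per_example_grad_norms) > clip_value))
```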
|
| 109 |
+
{
|
| 110 |
+
"section_id": "Appendix 3",
|
| 111 |
+
"parent_section_id": null,
|
| 112 |
+
"section_name": "Appendix C Results for tuning LayerFreeze with different percentage of parameters frozen",
|
| 113 |
+
"text": "In Figure 3 ###reference_###, we report intermediate results for freezing different percentage of parameters, ranging from freezing the top % to the remaining %. In table 6 ###reference_###, we report the exact test-other WER corresponding to the figure."
|
| 114 |
+
}
|
| 115 |
+
],
|
| 116 |
+
"tables": {
|
| 117 |
+
"1": {
|
| 118 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.18.4.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S3.T1.6.3\" style=\"font-size:90%;\">Extrapolation factor for linearly scaling-up noise multiplier, batch size and dataset size needed for each used noise multiplier value to get DP at , where is the scaled-up dataset size.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T1.16\" style=\"width:433.6pt;height:66.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(99.0pt,-15.1pt) scale(1.83975259104172,1.83975259104172) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.16.10\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.11.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S3.T1.11.5.5.6\" style=\"padding-top:2pt;padding-bottom:2pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.11.5.5.6.1\">Noise multiplier</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.7.1.1.1\" style=\"padding-top:2pt;padding-bottom:2pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.8.2.2.2\" style=\"padding-top:2pt;padding-bottom:2pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.9.3.3.3\" style=\"padding-top:2pt;padding-bottom:2pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.10.4.4.4\" style=\"padding-top:2pt;padding-bottom:2pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.11.5.5.5\" style=\"padding-top:2pt;padding-bottom:2pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.16.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T1.16.10.10.6\" style=\"padding-top:2pt;padding-bottom:2pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.16.10.10.6.1\">Scale-up</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S3.T1.12.6.6.1\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S3.T1.13.7.7.2\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S3.T1.14.8.8.3\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S3.T1.15.9.9.4\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S3.T1.16.10.10.5\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 119 |
+
"capture": "Table 1: Extrapolation factor for linearly scaling-up noise multiplier, batch size and dataset size needed for each used noise multiplier value to get DP at , where is the scaled-up dataset size."
|
| 120 |
+
},
|
| 121 |
+
"2": {
|
| 122 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.16.2.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S4.T2.2.1\" style=\"font-size:90%;\">Noise tolerance of the BEST-RQ 300M model. Our no pre-train upper bound is WER of 4.43/11.23. Above noise multiplier , the model diverges into WER of 100.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.14\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.14.13.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.14.13.1.1\" style=\"padding-top:2pt;padding-bottom:2pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.14.13.1.1.1\">Noise multiplier</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.14.13.1.2\" style=\"padding-top:2pt;padding-bottom:2pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.14.13.1.2.1\">test-clean/other WER</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.4.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.1.1\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.2.2\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.6.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.3.1\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.6.4.2\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.7.5.1\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.6.2\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.10.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.9.7.1\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.8.2\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.12.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.11.9.1\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.12.10.2\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.14.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.13.11.1\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.14.12.2\" style=\"padding-top:2pt;padding-bottom:2pt;\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 123 |
+
"capture": "Table 2: Noise tolerance of the BEST-RQ 300M model. Our no pre-train upper bound is WER of 4.43/11.23. Above noise multiplier , the model diverges into WER of 100."
|
| 124 |
+
},
|
| 125 |
+
"3": {
|
| 126 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.18.1.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S4.T3.19.2\" style=\"font-size:90%;\">Final noise tolerance WERs for BEST-RQ 300M model with our considered improvements. If we observe divergence (mainly for higher noise multipliers), we report results on fine-tuning with an early 200k step pre-trained checkpoint instead.\n</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.16\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.16.17.1\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.16.17.1.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.16.17.1.1.1\">Noise</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.16.17.1.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.16.17.1.2.1\">Public WS</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.16.17.1.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.16.17.1.3.1\">+PerLayerClip</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.16.17.1.4\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.16.17.1.4.1\">+LayerFreeze</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4\">\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T3.1.1.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.2.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.3.3.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.4.4\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T3.4.4.4.1\">2.67/5.74</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.8.8\">\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.5.5.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.6.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.7.7.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.8.8.4\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T3.8.8.4.1\">2.81/5.89</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.12\">\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T3.9.9.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.10.10.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.11.11.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.12.12.4\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" 
id=\"S4.T3.12.12.4.1\">3.19/7.17</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.16.16\">\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T3.13.13.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.14.14.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.15.15.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.16.16.4\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T3.16.16.4.1\">3.78/8.41</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 127 |
+
"capture": "Table 3: Final noise tolerance WERs for BEST-RQ 300M model with our considered improvements. If we observe divergence (mainly for higher noise multipliers), we report results on fine-tuning with an early 200k step pre-trained checkpoint instead.\n"
|
| 128 |
+
},
|
| 129 |
+
"4": {
|
| 130 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A2.T4\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.18.1.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"A2.T4.19.2\" style=\"font-size:90%;\">Global Clip Result for BEST-RQ 300M model with Group Normalization.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"A2.T4.16\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A2.T4.16.17.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A2.T4.16.17.1.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.16.17.1.1.1\">Clip Value</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A2.T4.16.17.1.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.16.17.1.2.1\">test-other WER</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T4.1.1.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T4.2.2.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T4.3.3.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T4.4.4.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T4.5.5.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"A2.T4.5.5.1.1\">1.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T4.6.6.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"A2.T4.6.6.2.1\">4.29</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.8.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T4.7.7.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T4.8.8.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.10.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T4.9.9.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T4.10.10.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.12.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T4.11.11.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T4.12.12.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.14.14\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T4.13.13.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T4.14.14.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.16.16\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T4.15.15.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T4.16.16.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 131 |
+
"capture": "Table 4: Global Clip Result for BEST-RQ 300M model with Group Normalization."
|
| 132 |
+
},
|
| 133 |
+
"5": {
|
| 134 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A2.T5\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T5.25.1.1\" style=\"font-size:90%;\">Table 5</span>: </span><span class=\"ltx_text\" id=\"A2.T5.26.2\" style=\"font-size:90%;\">Per Layer Clip Result for BEST-RQ 300M model with Group Normalization. Reporting best intermediate results among the <span class=\"ltx_text ltx_font_italic\" id=\"A2.T5.26.2.1\">dim</span> or <span class=\"ltx_text ltx_font_italic\" id=\"A2.T5.26.2.2\">uniform</span> variant. To save on compute, fine-tuning is done using an early pre-train checkpoint of 200k, assuming that the same conclusions hold for 1M. </span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"A2.T5.21\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A2.T5.21.22.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A2.T5.21.22.1.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T5.21.22.1.1.1\">Clip Value</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A2.T5.21.22.1.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T5.21.22.1.2.1\">test-other WER</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A2.T5.21.22.1.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T5.21.22.1.3.1\">uniform/ dim</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T5.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T5.1.1.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T5.2.2.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A2.T5.2.2.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"A2.T5.2.2.3.1\">uniform</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T5.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.3.3.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.4.4.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T5.4.4.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"A2.T5.4.4.3.1\">dim</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T5.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.5.5.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.6.6.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T5.6.6.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"A2.T5.6.6.3.1\">dim</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T5.8.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.7.7.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"A2.T5.7.7.1.1\">0.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.8.8.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"A2.T5.8.8.2.1\">5.43</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T5.8.8.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span 
class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"A2.T5.8.8.3.1\">dim</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T5.10.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.9.9.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.10.10.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T5.10.10.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"A2.T5.10.10.3.1\">uniform</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T5.12.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.11.11.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.12.12.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T5.12.12.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"A2.T5.12.12.3.1\">uniform</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T5.14.14\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.13.13.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.14.14.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T5.14.14.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"A2.T5.14.14.3.1\">uniform</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T5.16.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.15.15.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.16.16.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T5.16.16.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"A2.T5.16.16.3.1\">dim</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T5.18.18\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.17.17.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T5.18.18.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T5.18.18.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"A2.T5.18.18.3.1\">uniform</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T5.21.21\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T5.19.19.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T5.20.20.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A2.T5.21.21.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 135 |
+
"capture": "Table 5: Per Layer Clip Result for BEST-RQ 300M model with Group Normalization. Reporting best intermediate results among the dim or uniform variant. To save on compute, fine-tuning is done using an early pre-train checkpoint of 200k, assuming that the same conclusions hold for 1M. "
|
| 136 |
+
},
|
| 137 |
+
"6": {
|
| 138 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A3.T6\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T6.32.2.1\" style=\"font-size:90%;\">Table 6</span>: </span><span class=\"ltx_text\" id=\"A3.T6.2.1\" style=\"font-size:90%;\">Tuning our LayerFreeze with different percentage of parameters frozen, while keeping the DP noise multiplier constant at . To save on compute, fine-tuning is done using an early pre-train checkpoint of 200k, assuming that the same conclusions hold for 1M. </span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A3.T6.30\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A3.T6.5.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T6.3.1.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T6.5.3.4\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T6.5.3.4.1\">test-other WER</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T6.5.3.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T6.5.3.3.2\">Freeze or %</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A3.T6.6.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T6.6.4.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T6.6.4.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A3.T6.6.4.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"A3.T6.6.4.3.1\">No Freezing</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T6.9.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T6.7.5.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T6.8.6.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T6.9.7.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\">Freeze \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T6.12.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T6.10.8.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T6.11.9.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T6.12.10.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\">Freeze \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T6.15.13\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T6.13.11.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T6.14.12.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T6.15.13.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\">Freeze \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T6.18.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T6.16.14.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T6.17.15.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T6.18.16.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\">Freeze \n</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"A3.T6.21.19\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T6.19.17.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T6.20.18.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T6.21.19.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\">Freeze \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T6.24.22\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T6.22.20.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T6.23.21.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T6.24.22.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\">Freeze \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T6.27.25\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T6.25.23.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T6.26.24.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T6.27.25.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\">Freeze \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T6.30.28\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T6.28.26.1\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T6.29.27.2\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A3.T6.30.28.3\" style=\"padding-top:2.25pt;padding-bottom:2.25pt;\">Freeze \n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 139 |
+
"capture": "Table 6: Tuning our LayerFreeze with different percentage of parameters frozen, while keeping the DP noise multiplier constant at . To save on compute, fine-tuning is done using an early pre-train checkpoint of 200k, assuming that the same conclusions hold for 1M. "
|
| 140 |
+
}
|
| 141 |
+
},
|
| 142 |
+
"image_paths": {
|
| 143 |
+
"1": {
|
| 144 |
+
"figure_path": "2409.13953v1_figure_1.png",
|
| 145 |
+
"caption": "Fig. 1: The Differentially Private pre-training method for ASR encoder involving clipping per-example gradients from the minibatch, and addition of calibrated Gaussian noise. Gradients with norms below clip value are not clipped, as shown above. Once private pre-training of the ASR encoder is done, fine-tuning is done publicly after attaching an ASR decoder and using CTC loss [4, 15]",
|
| 146 |
+
"url": "http://arxiv.org/html/2409.13953v1/x1.png"
|
| 147 |
+
},
|
| 148 |
+
"2": {
|
| 149 |
+
"figure_path": "2409.13953v1_figure_2.png",
|
| 150 |
+
"caption": "Fig. 2: Extrapolating the noise multiplier linearly with batch size and dataset size to maintain the signal-to-noise ratio and improve privacy accounting.",
|
| 151 |
+
"url": "http://arxiv.org/html/2409.13953v1/extracted/5869447/figures/dataset-scale-up.png"
|
| 152 |
+
},
|
| 153 |
+
"3": {
|
| 154 |
+
"figure_path": "2409.13953v1_figure_3.png",
|
| 155 |
+
"caption": "Fig. 3: Performance from tuning our LayerFreeze with different percentage of parameters frozen, while keeping the DP noise multiplier constant at 1\u2062e\u2062-31e-31\\mathrm{e}\\scalebox{0.9}{-3}1 roman_e -3. Along the x-axis, we use p\ud835\udc5dpitalic_p to refer to the % of parameters consisting of layers with the highest accumulated gradient norms. We run experiments with freezing either the p\ud835\udc5dpitalic_p% parameters, or the remaining (1\u2212p)1\ud835\udc5d(1-p)( 1 - italic_p )%. To save on compute, fine-tuning is done using an early pre-train checkpoint of 200k, assuming that the same conclusions hold for 1M.",
|
| 156 |
+
"url": "http://arxiv.org/html/2409.13953v1/extracted/5869447/figures/dp-layerfreeze-ablations.png"
|
| 157 |
+
},
|
| 158 |
+
"4(a)": {
|
| 159 |
+
"figure_path": "2409.13953v1_figure_4(a).png",
|
| 160 |
+
"caption": "(a) Standard setting: Scaling up the noise multiplier linearly with batch size\nFig. 4: Most extreme setting: Scaling up the noise multiplier linearly with batch size and other independent parameters to maintain the signal to noise ratio. All other training dynamics remain unchanged with the assumption that the utility would remain the same.",
|
| 161 |
+
"url": "http://arxiv.org/html/2409.13953v1/extracted/5869447/figures/original-batch-multiplier.png"
|
| 162 |
+
},
|
| 163 |
+
"4(b)": {
|
| 164 |
+
"figure_path": "2409.13953v1_figure_4(b).png",
|
| 165 |
+
"caption": "(b) Scaling up the noise multiplier linearly with batch size and dataset size, with the dataset size having a headstart by 10x at each multiplier setting\nFig. 4: Most extreme setting: Scaling up the noise multiplier linearly with batch size and other independent parameters to maintain the signal to noise ratio. All other training dynamics remain unchanged with the assumption that the utility would remain the same.",
|
| 166 |
+
"url": "http://arxiv.org/html/2409.13953v1/extracted/5869447/figures/dataset-scale-up10x.png"
|
| 167 |
+
}
|
| 168 |
+
},
|
| 169 |
+
"validation": true,
|
| 170 |
+
"references": [
|
| 171 |
+
{
|
| 172 |
+
"1": {
|
| 173 |
+
"title": "\u201cAttention is all you need,\u201d",
|
| 174 |
+
"author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin,",
|
| 175 |
+
"venue": "Neural Information Processing Systems (NeurIPS), 2017.",
|
| 176 |
+
"url": null
|
| 177 |
+
}
|
| 178 |
+
},
|
| 179 |
+
{
|
| 180 |
+
"2": {
|
| 181 |
+
"title": "\u201cTransformer transducer: A streamable speech recognition model with transformer encoders and rnn-t loss,\u201d",
|
| 182 |
+
"author": "Qian Zhang, Han Lu, Hasim Sak, Anshuman Tripathi, Erik McDermott, Stephen Koo, and Shankar Kumar,",
|
| 183 |
+
"venue": "in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020.",
|
| 184 |
+
"url": null
|
| 185 |
+
}
|
| 186 |
+
},
|
| 187 |
+
{
|
| 188 |
+
"3": {
|
| 189 |
+
"title": "\u201cJasper: An End-to-End Convolutional Neural Acoustic Model,\u201d",
|
| 190 |
+
"author": "Jason Li, Vitaly Lavrukhin, Boris Ginsburg, Ryan Leary, Oleksii Kuchaiev, Jonathan M Cohen, Huyen Nguyen, and Ravi Teja Gadde,",
|
| 191 |
+
"venue": "in Interspeech, 2019.",
|
| 192 |
+
"url": null
|
| 193 |
+
}
|
| 194 |
+
},
|
| 195 |
+
{
|
| 196 |
+
"4": {
|
| 197 |
+
"title": "\u201cConformer: Convolution-augmented Transformer for Speech Recognition,\u201d",
|
| 198 |
+
"author": "Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, and Ruoming Pang,",
|
| 199 |
+
"venue": "in Interspeech, 2020.",
|
| 200 |
+
"url": null
|
| 201 |
+
}
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"5": {
|
| 205 |
+
"title": "\u201cSelf-supervised learning with random-projection quantizer for speech recognition,\u201d",
|
| 206 |
+
"author": "Chung-Cheng Chiu, James Qin, Yu Zhang, Jiahui Yu, and Yonghui Wu,",
|
| 207 |
+
"venue": "in International Conference on Machine Learning, 2022.",
|
| 208 |
+
"url": null
|
| 209 |
+
}
|
| 210 |
+
},
|
| 211 |
+
{
|
| 212 |
+
"6": {
|
| 213 |
+
"title": "\u201cMembership inference attacks against machine learning models,\u201d",
|
| 214 |
+
"author": "Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov,",
|
| 215 |
+
"venue": "in 2017 IEEE symposium on security and privacy (SP). IEEE, 2017, pp. 3\u201318.",
|
| 216 |
+
"url": null
|
| 217 |
+
}
|
| 218 |
+
},
|
| 219 |
+
{
|
| 220 |
+
"7": {
|
| 221 |
+
"title": "\u201cThe secret sharer: Evaluating and testing unintended memorization in neural networks,\u201d",
|
| 222 |
+
"author": "Nicholas Carlini, Chang Liu, \u00dalfar Erlingsson, Jernej Kos, and Dawn Song,",
|
| 223 |
+
"venue": "in 28th USENIX Security Symposium, 2019.",
|
| 224 |
+
"url": null
|
| 225 |
+
}
|
| 226 |
+
},
|
| 227 |
+
{
|
| 228 |
+
"8": {
|
| 229 |
+
"title": "\u201cExtracting training data from large language models,\u201d",
|
| 230 |
+
"author": "Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al.,",
|
| 231 |
+
"venue": "in 30th USENIX Security Symposium, 2021.",
|
| 232 |
+
"url": null
|
| 233 |
+
}
|
| 234 |
+
},
|
| 235 |
+
{
|
| 236 |
+
"9": {
|
| 237 |
+
"title": "\u201cExtracting training data from diffusion models,\u201d",
|
| 238 |
+
"author": "Nicolas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramer, Borja Balle, Daphne Ippolito, and Eric Wallace,",
|
| 239 |
+
"venue": "in USENIX Security Symposium, 2023.",
|
| 240 |
+
"url": null
|
| 241 |
+
}
|
| 242 |
+
},
|
| 243 |
+
{
|
| 244 |
+
"10": {
|
| 245 |
+
"title": "\u201cExtracting Targeted Training Data from ASR Models, and How to Mitigate It,\u201d",
|
| 246 |
+
"author": "Ehsan Amid, Om Dipakbhai Thakkar, Arun Narayanan, Rajiv Mathews, and Francoise Beaufays,",
|
| 247 |
+
"venue": "in Interspeech, 2022.",
|
| 248 |
+
"url": null
|
| 249 |
+
}
|
| 250 |
+
},
|
| 251 |
+
{
|
| 252 |
+
"11": {
|
| 253 |
+
"title": "\u201cMeasuring forgetting of memorized training examples,\u201d",
|
| 254 |
+
"author": "Matthew Jagielski, Om Thakkar, Florian Tram\u00e8r, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Guha Thakurta, Nicolas Papernot, and Chiyuan Zhang,",
|
| 255 |
+
"venue": "in The International Conference on Learning Representations (ICLR), 2023.",
|
| 256 |
+
"url": null
|
| 257 |
+
}
|
| 258 |
+
},
|
| 259 |
+
{
|
| 260 |
+
"12": {
|
| 261 |
+
"title": "\u201cUnintended memorization in large asr models, and how to mitigate it,\u201d",
|
| 262 |
+
"author": "Lun Wang, Om Thakkar, and Rajiv Mathews,",
|
| 263 |
+
"venue": "in ICASSP, 2024.",
|
| 264 |
+
"url": null
|
| 265 |
+
}
|
| 266 |
+
},
|
| 267 |
+
{
|
| 268 |
+
"13": {
|
| 269 |
+
"title": "\u201cNoise masking attacks and defenses for pretrained speech models,\u201d",
|
| 270 |
+
"author": "Matthew Jagielski, Om Thakkar, and Lun Wang,",
|
| 271 |
+
"venue": "in ICASSP, 2024.",
|
| 272 |
+
"url": null
|
| 273 |
+
}
|
| 274 |
+
},
|
| 275 |
+
{
|
| 276 |
+
"14": {
|
| 277 |
+
"title": "\u201cQuantifying unintended memorization in best-rq asr encoders,\u201d",
|
| 278 |
+
"author": "Virat Shejwalkar, Om Thakkar, and Arun Narayanan,",
|
| 279 |
+
"venue": "in Interspeech 2024, 2024, pp. 2905\u20132909.",
|
| 280 |
+
"url": null
|
| 281 |
+
}
|
| 282 |
+
},
|
| 283 |
+
{
|
| 284 |
+
"15": {
|
| 285 |
+
"title": "\u201cConnectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks,\u201d",
|
| 286 |
+
"author": "Alex Graves, Santiago Fern\u00e1ndez, Faustino Gomez, and J\u00fcrgen Schmidhuber,",
|
| 287 |
+
"venue": "in The International Conference on Machine Learning (ICML), 2006.",
|
| 288 |
+
"url": null
|
| 289 |
+
}
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"16": {
|
| 293 |
+
"title": "\u201cCalibrating noise to sensitivity in private data analysis,\u201d",
|
| 294 |
+
"author": "Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith,",
|
| 295 |
+
"venue": "in Theory of Cryptography Conference (TCC), 2006.",
|
| 296 |
+
"url": null
|
| 297 |
+
}
|
| 298 |
+
},
|
| 299 |
+
{
|
| 300 |
+
"17": {
|
| 301 |
+
"title": "\u201cPractical and private (deep) learning without sampling or shuffling,\u201d",
|
| 302 |
+
"author": "Peter Kairouz, Brendan McMahan, Shuang Song, Om Thakkar, Abhradeep Thakurta, and Zheng Xu,",
|
| 303 |
+
"venue": "in ICML, 2021.",
|
| 304 |
+
"url": null
|
| 305 |
+
}
|
| 306 |
+
},
|
| 307 |
+
{
|
| 308 |
+
"18": {
|
| 309 |
+
"title": "\u201cFederated learning with differential privacy for end-to-end speech recognition,\u201d",
|
| 310 |
+
"author": "Martin Pelikan, Sheikh Shams Azam, Vitaly Feldman, Jan Silovsky, Kunal Talwar, Tatiana Likhomanenko, et al.,",
|
| 311 |
+
"venue": "arXiv preprint arXiv:2310.00098, 2023.",
|
| 312 |
+
"url": null
|
| 313 |
+
}
|
| 314 |
+
},
|
| 315 |
+
{
|
| 316 |
+
"19": {
|
| 317 |
+
"title": "\u201cDifferentially private fine-tuning of language models,\u201d",
|
| 318 |
+
"author": "Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A. Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, and Huishuai Zhang,",
|
| 319 |
+
"venue": "in ICLR, 2022.",
|
| 320 |
+
"url": null
|
| 321 |
+
}
|
| 322 |
+
},
|
| 323 |
+
{
|
| 324 |
+
"20": {
|
| 325 |
+
"title": "\u201cDifferentially private bias-term only fine-tuning of foundation models,\u201d",
|
| 326 |
+
"author": "Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, and George Karypis,",
|
| 327 |
+
"venue": "arXiv preprint arXiv:2210.00036, 2022.",
|
| 328 |
+
"url": null
|
| 329 |
+
}
|
| 330 |
+
},
|
| 331 |
+
{
|
| 332 |
+
"21": {
|
| 333 |
+
"title": "\u201cLoRA: Low-rank adaptation of large language models,\u201d",
|
| 334 |
+
"author": "Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen,",
|
| 335 |
+
"venue": "in ICLR, 2022.",
|
| 336 |
+
"url": null
|
| 337 |
+
}
|
| 338 |
+
},
|
| 339 |
+
{
|
| 340 |
+
"22": {
|
| 341 |
+
"title": "\u201cPrivate empirical risk minimization: Efficient algorithms and tight error bounds,\u201d",
|
| 342 |
+
"author": "Raef Bassily, Adam Smith, and Abhradeep Thakurta,",
|
| 343 |
+
"venue": "in Annual Symposium on Foundations of Computer Science, 2014.",
|
| 344 |
+
"url": null
|
| 345 |
+
}
|
| 346 |
+
},
|
| 347 |
+
{
|
| 348 |
+
"23": {
|
| 349 |
+
"title": "\u201cTraining text-to-text transformers with privacy guarantees,\u201d",
|
| 350 |
+
"author": "Natalia Ponomareva, Jasmijn Bastings, and Sergei Vassilvitskii,",
|
| 351 |
+
"venue": "in Findings of the Association for Computational Linguistics (ACL), 2022.",
|
| 352 |
+
"url": null
|
| 353 |
+
}
|
| 354 |
+
},
|
| 355 |
+
{
|
| 356 |
+
"24": {
|
| 357 |
+
"title": "\u201cLearning differentially private recurrent language models,\u201d",
|
| 358 |
+
"author": "H. Brendan McMahan, Daniel Ramage, Kunal Talwar, and Li Zhang,",
|
| 359 |
+
"venue": "in ICLR, 2018.",
|
| 360 |
+
"url": null
|
| 361 |
+
}
|
| 362 |
+
},
|
| 363 |
+
{
|
| 364 |
+
"25": {
|
| 365 |
+
"title": "\u201cThe algorithmic foundations of differential privacy,\u201d",
|
| 366 |
+
"author": "Cynthia Dwork, Aaron Roth, et al.,",
|
| 367 |
+
"venue": "Foundations and Trends\u00ae in Theoretical Computer Science, vol. 9, no. 3\u20134, pp. 211\u2013407, 2014.",
|
| 368 |
+
"url": null
|
| 369 |
+
}
|
| 370 |
+
},
|
| 371 |
+
{
|
| 372 |
+
"26": {
|
| 373 |
+
"title": "\u201cHow to dp-fy ml: A practical guide to machine learning with differential privacy,\u201d",
|
| 374 |
+
"author": "Natalia Ponomareva, Hussein Hazimeh, Alex Kurakin, Zheng Xu, Carson Denison, H Brendan McMahan, Sergei Vassilvitskii, Steve Chien, and Abhradeep Guha Thakurta,",
|
| 375 |
+
"venue": "Journal of Artificial Intelligence Research, 2023.",
|
| 376 |
+
"url": null
|
| 377 |
+
}
|
| 378 |
+
},
|
| 379 |
+
{
|
| 380 |
+
"27": {
|
| 381 |
+
"title": "\u201cDeep learning with differential privacy,\u201d",
|
| 382 |
+
"author": "Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang,",
|
| 383 |
+
"venue": "in The SIGSAC conference on computer and communications security, 2016.",
|
| 384 |
+
"url": null
|
| 385 |
+
}
|
| 386 |
+
},
|
| 387 |
+
{
|
| 388 |
+
"28": {
|
| 389 |
+
"title": "\u201cEnabling fast differentially private sgd via just-in-time compilation and vectorization,\u201d",
|
| 390 |
+
"author": "Pranav Subramani, Nicholas Vadivelu, and Gautam Kamath,",
|
| 391 |
+
"venue": "NeurIPS, 2021.",
|
| 392 |
+
"url": null
|
| 393 |
+
}
|
| 394 |
+
},
|
| 395 |
+
{
|
| 396 |
+
"29": {
|
| 397 |
+
"title": "\u201cWhen does differentially private learning not suffer in high dimensions?,\u201d",
|
| 398 |
+
"author": "Xuechen Li, Daogao Liu, Tatsunori B Hashimoto, Huseyin A Inan, Janardhan Kulkarni, Yin-Tat Lee, and Abhradeep Guha Thakurta,",
|
| 399 |
+
"venue": "NeurIPS, 2022.",
|
| 400 |
+
"url": null
|
| 401 |
+
}
|
| 402 |
+
},
|
| 403 |
+
{
|
| 404 |
+
"30": {
|
| 405 |
+
"title": "\u201cLarge-scale differentially private BERT,\u201d",
|
| 406 |
+
"author": "Rohan Anil, Badih Ghazi, Vineet Gupta, Ravi Kumar, and Pasin Manurangsi,",
|
| 407 |
+
"venue": "in Findings of the Association for Computational Linguistics: EMNLP, 2022.",
|
| 408 |
+
"url": null
|
| 409 |
+
}
|
| 410 |
+
},
|
| 411 |
+
{
|
| 412 |
+
"31": {
|
| 413 |
+
"title": "\u201cToward training at imagenet scale with differential privacy,\u201d",
|
| 414 |
+
"author": "Alexey Kurakin, Shuang Song, Steve Chien, Roxana Geambasu, Andreas Terzis, and Abhradeep Thakurta,",
|
| 415 |
+
"venue": "arXiv preprint arXiv:2201.12328, 2022.",
|
| 416 |
+
"url": null
|
| 417 |
+
}
|
| 418 |
+
},
|
| 419 |
+
{
|
| 420 |
+
"32": {
|
| 421 |
+
"title": "\u201cUnlocking high-accuracy differentially private image classification through scale,\u201d",
|
| 422 |
+
"author": "Soham De, Leonard Berrada, Jamie Hayes, Samuel L Smith, and Borja Balle,",
|
| 423 |
+
"venue": "arXiv preprint arXiv:2204.13650, 2022.",
|
| 424 |
+
"url": null
|
| 425 |
+
}
|
| 426 |
+
},
|
| 427 |
+
{
|
| 428 |
+
"33": {
|
| 429 |
+
"title": "\u201cLarge language models can be strong differentially private learners,\u201d",
|
| 430 |
+
"author": "Xuechen Li, Florian Tramer, Percy Liang, and Tatsunori Hashimoto,",
|
| 431 |
+
"venue": "arXiv preprint arXiv:2110.05679, 2021.",
|
| 432 |
+
"url": null
|
| 433 |
+
}
|
| 434 |
+
},
|
| 435 |
+
{
|
| 436 |
+
"34": {
|
| 437 |
+
"title": "\u201cInference and interference: The role of clipping, pruning and loss landscapes in differentially private stochastic gradient descent,\u201d",
|
| 438 |
+
"author": "Lauren Watson, Eric Gan, Mohan Dantam, Baharan Mirzasoleiman, and Rik Sarkar,",
|
| 439 |
+
"venue": "arXiv preprint arXiv:2311.06839, 2023.",
|
| 440 |
+
"url": null
|
| 441 |
+
}
|
| 442 |
+
},
|
| 443 |
+
{
|
| 444 |
+
"35": {
|
| 445 |
+
"title": "\u201cBigssl: Exploring the frontier of large-scale semi-supervised learning for automatic speech recognition,\u201d",
|
| 446 |
+
"author": "Yu Zhang, Daniel S Park, Wei Han, James Qin, Anmol Gulati, Joel Shor, Aren Jansen, Yuanzhong Xu, Yanping Huang, Shibo Wang, et al.,",
|
| 447 |
+
"venue": "IEEE Journal of Selected Topics in Signal Processing, 2022.",
|
| 448 |
+
"url": null
|
| 449 |
+
}
|
| 450 |
+
},
|
| 451 |
+
{
|
| 452 |
+
"36": {
|
| 453 |
+
"title": "\u201cLibri-light: A benchmark for asr with limited or no supervision,\u201d",
|
| 454 |
+
"author": "Jacob Kahn, Morgane Rivi\u00e8re, Weiyi Zheng, Evgeny Kharitonov, Qiantong Xu, Pierre-Emmanuel Mazar\u00e9, Julien Karadayi, Vitaliy Liptchinsky, Ronan Collobert, Christian Fuegen, et al.,",
|
| 455 |
+
"venue": "in ICASSP, 2020.",
|
| 456 |
+
"url": null
|
| 457 |
+
}
|
| 458 |
+
},
|
| 459 |
+
{
|
| 460 |
+
"37": {
|
| 461 |
+
"title": "\u201cLibrispeech: An asr corpus based on public domain audio books,\u201d",
|
| 462 |
+
"author": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur,",
|
| 463 |
+
"venue": "in ICASSP, 2015.",
|
| 464 |
+
"url": null
|
| 465 |
+
}
|
| 466 |
+
},
|
| 467 |
+
{
|
| 468 |
+
"38": {
|
| 469 |
+
"title": "\u201cR\u00e9nyi differential privacy of the sampled gaussian mechanism,\u201d",
|
| 470 |
+
"author": "Ilya Mironov, Kunal Talwar, and Li Zhang,",
|
| 471 |
+
"venue": "arXiv preprint arXiv:1908.10530, 2019.",
|
| 472 |
+
"url": null
|
| 473 |
+
}
|
| 474 |
+
},
|
| 475 |
+
{
|
| 476 |
+
"39": {
|
| 477 |
+
"title": "\u201cWhy is public pretraining necessary for private model training?,\u201d",
|
| 478 |
+
"author": "Arun Ganesh, Mahdi Haghifam, Milad Nasr, Sewoong Oh, Thomas Steinke, Om Thakkar, Abhradeep Guha Thakurta, and Lun Wang,",
|
| 479 |
+
"venue": "in ICML, 2023.",
|
| 480 |
+
"url": null
|
| 481 |
+
}
|
| 482 |
+
},
|
| 483 |
+
{
|
| 484 |
+
"40": {
|
| 485 |
+
"title": "\u201cScalable differential privacy with sparse network finetuning,\u201d",
|
| 486 |
+
"author": "Zelun Luo, Daniel J Wu, Ehsan Adeli, and Li Fei-Fei,",
|
| 487 |
+
"venue": "in The IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.",
|
| 488 |
+
"url": null
|
| 489 |
+
}
|
| 490 |
+
},
|
| 491 |
+
{
|
| 492 |
+
"41": {
|
| 493 |
+
"title": "\u201cPublic data-assisted mirror descent for private model training,\u201d",
|
| 494 |
+
"author": "Ehsan Amid, Arun Ganesh, Rajiv Mathews, Swaroop Ramaswamy, Shuang Song, Thomas Steinke, Vinith M. Suriyakumar, Om Thakkar, and Abhradeep Thakurta,",
|
| 495 |
+
"venue": "in ICML, 2022.",
|
| 496 |
+
"url": null
|
| 497 |
+
}
|
| 498 |
+
}
|
| 499 |
+
],
|
| 500 |
+
"url": "http://arxiv.org/html/2409.13953v1"
|
| 501 |
+
}
|
20240921/2409.13972v1.json
ADDED
|
@@ -0,0 +1,416 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Can Language Model Understand Word Semantics as A Chatbot? An Empirical Study of Language Model Internal External Mismatch",
|
| 3 |
+
"abstract": "Current common interactions with language models is through full inference. This approach may not necessarily align with the model\u2019s internal knowledge. Studies show discrepancies between prompts and internal representations. Most focus on sentences understanding. We study the discrepancy of word semantics understanding in internal and external mismatch across Encoder-only, Decoder-only, and Encoder-Decoder pre-trained language models.000\u2217Equal contribution",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Language models (LMs) (Devlin et al., 2019a ###reference_b5###; Radford et al., 2019 ###reference_b23###; Wang and Komatsuzaki, 2021 ###reference_b28###; Brown et al., 2020 ###reference_b2###) have drawn a wide range of interest in many fields.\nThe ability to process natural language, encode data into parameters, and generate convincing paragraphs drives many people to consider it as trusted knowledge source.\nLM\u2019s truthfulness is then a key factor in determining if they are suitable for many downstream applications;\nin other words, researchers need to assess LMs integrity in their claims.\nMachine honesty is very important in recent LLM research. Honesty intersects with aspects such as truthfulness (Evans et al., 2021 ###reference_b7###), calibration (Guo et al., 2017 ###reference_b10###; Minderer et al., 2021 ###reference_b19###; Mielke et al., 2022 ###reference_b17###), self-knowledge (Yin et al., 2023 ###reference_b34###; Kadavath et al., 2022 ###reference_b12###), non-deceptiveness (Azaria and Mitchell, 2023 ###reference_b1###) and so on.\nThere are works investigating whether AI models are aware of what they are expressing.\nThe comprehensive analysis on the honesty of LLMs by Kadavath et al. (2022 ###reference_b12###) concludes that LLMs are well-calibrated. Cheng et al. (2024 ###reference_b3###) has similar conclusions regarding models\u2019 awareness and understanding of what they know and what they do not know. Other works also demonstrated quirky behaviors and phenomena associated with how the model respond to prompt (Khashabi et al., 2022 ###reference_b13###; Webson et al., 2023 ###reference_b31###).\nPrior works keep demonstrate that there is a discrepancy between internal and external representations. Hu and Levy (2023 ###reference_b11###) explored the discrepancies between the model\u2019s internal next token distribution and the distribution obtained using prompts such as \"What is the best next word?\".\n Liu et al. (2023 ###reference_b14###) analyzed the internal and external inconsistencies of the model from the perspectives of probing(internal) and querying(external). Azaria and Mitchell (2023 ###reference_b1###) investigated how to use the internal state to determine the truthfulness of text generated by language models, thereby also confirming inconsistencies between the model\u2019s internal and external outputs.\nIn this work, external output refers to the results produced by LMs, specifically the distributions over special positional tokens\n(e.g. [MASK] token in Encoder-based LMs, next token in Decoder-based LMs).\nResearches show that there are information stores in the internal hidden representation. We use hidden representation as the internal information (Wang et al., 2023b ###reference_b30###).\nELMo (Peters et al., 2018 ###reference_b21###) is the first to introduce the concept of contextual embeddings by adapting embeddings to word usage in context. Before that word embeddings are static Mikolov et al. (2013 ###reference_b18###); Pennington et al. (2014 ###reference_b20###).\nBERT (Devlin et al., 2019a ###reference_b5###) utilizes transformer architecture to capture deep contextual nuances, setting new standards for various tasks.\nWord embeddings represent the contextual meaning of a word using high-dimensional vectors. In this work, we employed probes and queries to compare language models across three commonly used word embedding evaluation benchmarks.\nPrevious research by Liu et al. 
(2023 ###reference_b14###) found no significant difference between queries and probes in question-answering tasks, which primarily focus on sentence-level meaning extraction.\nHowever, our results diverge markedly from these findings; we observed a substantial gap between probes and queries, highlighting potential limitations of queries in capturing word-level semantics."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Method",
|
| 15 |
+
"text": "To investigate LM\u2019s understanding on word semantics,\nwe mainly focus on 3 distinct tasks spanning the spectrum of LM training streams;\nnamely word similarity, structured prediction, and Analogy.\nFirst, we introduce the benchmark, followed by the strategy of probing and querying.\nWe employ the linear probing, which are commonly used in recent NLP works (Liu et al., 2023 ###reference_b14###; Marks and Tegmark, 2024 ###reference_b16###). Compared to the finetuning process, linear probing takes only thousands of parameters which is significantly smaller than the LMs itself with millions to billions of parameters."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Word Similarity",
|
| 21 |
+
"text": "Word similarity tasks (Finkelstein et al., 2001 ###reference_b8###; Luong et al., 2013 ###reference_b15###) are used to test semantic similarities between words. We use WiC (Pilehvar and Camacho-Collados, 2019 ###reference_b22###) to test the similarity of contextual embedding. WiC contains 5428 test data and 1400 training data. Each data contains a pair of sentences that both contain the target word, and the golden is to answer whether the target word in two sentences has the same meaning contextually.\nLet be the tokens that construct the target sentence. be the hidden vector of target word tokens in the first sentence. We use the average vector to represent the target word in the first sentence.\nSimilarly, we use to represent the target word in the other sentence. We adopt the classification objective function in Reimers and Gurevych (2019 ###reference_b25###) that takes as input and build a 2-class logistic regression on top:\nWe use the queries that are commonly used in other work Wei et al. (2022 ###reference_b32###). For example:\n{Sentence1}\n{Sentence2}\nDoes the word \"{word}\" mean the same thing in the above two sentences?\nAnswer:[MASK]\nThe prompts we used are listed in Appendix A ###reference_###. We report the accuracy with the highest accuracy. For generative LMs, we will ask LMs to generate [MASK] position tokens.\nAfter the inference, we extract the result logits and compare the probability of the expected output token;\nfor example, Bert is expected to output token \u2019Yes\u2019 or \u2019No\u2019, and then a normalized probability is computed."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Structured Prediction",
|
| 27 |
+
"text": "Named Entity Recognition (NER) (Tjong Kim Sang and De Meulder, 2003 ###reference_b26###; Derczynski et al., 2017 ###reference_b4###) task is to identify and classify entities (like names of persons, organizations, locations and etc.) in a given text. NER is also used to evaluate word embeddings (Pennington et al., 2014 ###reference_b20###). In this work, we use CoNLL2003 (Tjong Kim Sang and De Meulder, 2003 ###reference_b26###) which contains 46,435 tokens in the test set. CoNLL2003 has four entities: person, location, miscellaneous and organization. Detailed statistics are listed in Appendix C ###reference_###.\nSimilarly, we use to be the average hidden vectors of all tokens in the word. We then build a 5-class logistic regression:\nAfter comparing the accuracy of many prompts, we adopt the following:\n{Sentence}. The word {word} in the previous sentence is labelled as [MASK]\nWe compare the probability of \u201clocation\u201d, \u201cperson\u201d, \u201corganization\u201d, \u201cmiscellaneous\u201d and select the one with the highest score as the output."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "Analogy",
|
| 33 |
+
"text": "BATS (Gladkova et al., ###reference_b9###) is an analogy dataset containing 199 validation data and 1799 test data. BATS is commonly used to evaluate the quality of word embeddings by testing their ability to capture semantic and syntactic relationships between words. This benchmark contains multiple-choice questions that give stem words a and b and ask to choose the best pair of words from 4 choices that best fit \" a is to b as c is to d?\".\nFor example, given the stem pairs (\"einstein\", \"physicist\") and 4 choices pairs (\"bee\", \"larva\"), (\"schwarzenegger\", \"napoleon\"), (\"pascal\", \"mathematician\"), (\"locke\", \"Confucius\"), apparently the pair (\"pascal\", \"mathematician\") should be chosen since it has the closest relation as the stem pair.\nWe first use GPT-4 to generate 5 sentences for each word in the BATS. Then compute hidden vectors of each word of each sentence. Then average 5 word vectors to be the vector representation of each word. For the probe, each data has three negative samples and one positive sample, which makes the training data unbalanced. We follow (Ushio et al., 2021 ###reference_b27###), for gold analogies, we put both (a, b)-(c, d) and (a, c) - (b, d) as positive samples. This would increase the size of the positive samples. Let be the vector representation of word and so on. For the analogy question, the distance from b to a should be similar to the distance from d and c. Therefore we also inherit classification objection Reimers and Gurevych (2019 ###reference_b25###).\nDuring the evaluation step, the pair with the highest positive probability will be chosen.\nWe select the following prompt:\n{} is to {} as:\nA) {} is to {}\nB) {} is to {}\nC) {} is to {}\nD) {} is to {}\nAnswer:[MASK]\nOther prompts are listed in Appendix B ###reference_###."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "Results",
|
| 39 |
+
"text": ""
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.1",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Main Results",
|
| 45 |
+
"text": "Table 1 ###reference_### shows the accuracy achieved by representative models in the target benchmark. We found noticeable differences between probe and query in terms of word semantic capturing. This gap is evident across all models and all benchmarks, highlighting that pretrained language models, when used as chatbots, can exhibit information discrepancies compared to the knowledge stored within their internal neurons.\nIn WiC benchmark, the answer to the prompt question is binary (yes-no question);\nwe observe that all models are query accuracy is within the range of 49% to 53%, close to random guess (50%).\nProbe accuracy is considerably higher with a highest 65% chance to correctly understand context-sentence word semantics.\nAs aforementioned, because probing performs linear classification directly on the word embedding, the higher accuracy above random guess indicate that the internal representation is indeed capable to distinguish the word similarity; however, this knowledge failed to propagate to the model output.\nF1 score is a common indicator for NER tasks;\nwe observed a more pronounced internal-external discrepancy.\nBecause models with encoder have a better understanding of the input words, they outperform decoder-only models.\nFor instance, BERT embeddings for probing achieved state-of-the-art performance with an F1 score of 96%.\nGPT-2, on the other hand, has a much lower F1 score, conforming to the observation made by Wang et al. (2023a ###reference_b29###) and Xie et al. (2023 ###reference_b33###), where GPT3/ChatGPT in both fine-tune and zero-shot setting is less performant than BERT.\nIn contrast, the performance of queries was even lower than random guessing.\nGiven that the prompt in Analogy benchmark is a multiple choice question with four options,\nBERT models exhibits a nearly random guess accuracy around 25% in query, while the probe accuracy almost doubles.\nThe query accuracy of GPT and T5 models direct some of their understanding to the output, reaching around 30%.\nGPT-2 has the lowest probe accuracy at 41%; it may reflect that decoder-based models are more suitable for text generation and less performant in extracting the meaning of words."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.2",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "Instruct Tuning and Finetuning",
|
| 51 |
+
"text": "When there is a mismatch between internal and external representation,\nit may indicate an alignment issue;\nthe knowledge of the model is not properly propagated to the very end.\nWe then investigate if finetuning improves the misalignment issue.\nFlan T5 is a instruction-finetune model based on T5 in a mixture of tasks Raffel et al. (2023 ###reference_b24###); Wei et al. (2022 ###reference_b32###); specifically, WiC is explicitly used as one of the datasets.\nAs shown in Table 2 ###reference_###,\nFlan T5 outperforms the T5 in terms of query accuracy, proving that finetuning indeed enhances model\u2019s ability to direct the knowledge to the output.\nA similar observation can be found in Liu et al. (2023 ###reference_b14###),\nwhere the authors finetune GPT2-XL on true question/answer pairs.\nHowever, although the accuracy is boosted from 50% to 59%, probing still shows a better performance.\nThe model seems to have a similar understanding of word semantics in both models, and thus Flan T5 slightly improves probe accuracy from 65% to 68% compared to T5."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.3",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "Calibration",
|
| 57 |
+
"text": "A well-calibrated model should exhibit close alignment between confidence and accuracy. We demonstrate the confidence and accuracy of three models on the WIC task in Figure 1 ###reference_###; probe are better calibrated than queries. Furthermore, model with better WiC performance like BERT and T5 has the best calibration than GPT-2.\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6###"
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4",
|
| 61 |
+
"parent_section_id": null,
|
| 62 |
+
"section_name": "Conclusion",
|
| 63 |
+
"text": "In this paper, we studied the discrepancy between language model\u2019s internal and external representations. We mainly focus on the ability to understand the word semantics.\nProbe consistently shows a better performance than query, indicating that there is potential to improve models truthfulness. Currently, the model knowledge is not properly reflected on the model\u2019s generated output. We find that finetuning or calibration help to improve the accuracy to some extend, but it still not on par to probe accuracy. Other factors like model size also contribute to the discrepancy. Improving the model\u2019s truthfulness will unleash their potential in applications where reliability and robustness are preferable."
|
| 64 |
+
}
|
| 65 |
+
],
|
| 66 |
+
"appendix": [
|
| 67 |
+
{
|
| 68 |
+
"section_id": "Appendix 1",
|
| 69 |
+
"parent_section_id": null,
|
| 70 |
+
"section_name": "Appendix A WiC Prompt",
|
| 71 |
+
"text": "See Table 3 ###reference_### for the list of prompts we use in WIC evaluation."
|
| 72 |
+
},
|
| 73 |
+
{
|
| 74 |
+
"section_id": "Appendix 2",
|
| 75 |
+
"parent_section_id": null,
|
| 76 |
+
"section_name": "Appendix B Analogy Question Prompts",
|
| 77 |
+
"text": "See Table 4 ###reference_### for the prompts we use for analogy question."
|
| 78 |
+
},
|
| 79 |
+
{
|
| 80 |
+
"section_id": "Appendix 3",
|
| 81 |
+
"parent_section_id": null,
|
| 82 |
+
"section_name": "Appendix C CONLL2003 Statistics",
|
| 83 |
+
"text": "See Table 5 ###reference_### for CoNLL2003 statistics."
|
| 84 |
+
}
|
| 85 |
+
],
|
| 86 |
+
"tables": {
|
| 87 |
+
"1": {
|
| 88 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S3.T1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.1.1.1.1.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.1.1.1.2.1\">method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.3\">WiC</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S3.T1.1.1.1.4\">NER</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.1.1.1.5\">Analogy</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.2.1\">Acc(%)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.1.2.2.2\">Precision</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.1.2.2.3\">Recall</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.1.2.2.4\">F1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.1.2.2.5\">Acc(%)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.1.3.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.1.3.1.1.1\">BERT-base</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.1.2\">Query</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.1.3\">50</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.3.1.4\">7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.3.1.5\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.3.1.6\">14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.3.1.7\">25</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.4.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.4.2.1\">Probe</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.4.2.2\">65</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.4.2.3\">95</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.4.2.4\">96</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.4.2.5\">96</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.4.2.6\">51</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.5.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.1.5.3.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.1.5.3.1.1\">BERT-large</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.3.2\">Query</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.3.3\">53</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.5.3.4\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S3.T1.1.5.3.5\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.5.3.6\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.5.3.7\">26</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.6.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.6.4.1\">Probe</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.6.4.2\">65</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.6.4.3\">96</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.6.4.4\">95</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.6.4.5\">96</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.6.4.6\">48</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.7.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.1.7.5.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.1.7.5.1.1\">GPT-2</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.7.5.2\">Query</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.7.5.3\">49</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.7.5.4\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.7.5.5\">42</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.7.5.6\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.7.5.7\">33</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.8.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.8.6.1\">Probe</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.8.6.2\">58</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.8.6.3\">97</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.8.6.4\">32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.8.6.5\">48</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.8.6.6\">41</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.9.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.1.9.7.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.1.9.7.1.1\">T5-small</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.9.7.2\">Query</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.9.7.3\">49</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.9.7.4\">5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.9.7.5\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.9.7.6\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.9.7.7\">31</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.10.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.10.8.1\">Probe</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.1.10.8.2\">61</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.10.8.3\">98</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.10.8.4\">94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.1.10.8.5\">96</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.1.10.8.6\">47</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.11.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_t\" id=\"S3.T1.1.11.9.1\" 
rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.1.11.9.1.1\">T5-large</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.11.9.2\">Query</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.11.9.3\">50</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.11.9.4\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.11.9.5\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.1.11.9.6\">5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.11.9.7\">35</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.12.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S3.T1.1.12.10.1\">Probe</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S3.T1.1.12.10.2\">65</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.1.12.10.3\">99</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.1.12.10.4\">96</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T1.1.12.10.5\">97</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T1.1.12.10.6\">48</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Accuracy of encoder, decoder, and encoder-decoder models on benchmark WIC, NER, and Analogy.</figcaption>\n</figure>",
|
| 89 |
+
"capture": "Table 1: Accuracy of encoder, decoder, and encoder-decoder models on benchmark WIC, NER, and Analogy."
|
| 90 |
+
},
|
| 91 |
+
"2": {
|
| 92 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1.1\">Model</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1.2\">Method</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T2.1.1.1.3\">WiC</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.2.1.1\">T5-large</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.2.1.2\">Query</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.2.1.3\">50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.3.2\">\n<td class=\"ltx_td ltx_border_r\" id=\"S3.T2.1.3.2.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.1.3.2.2\">Probe</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.1.3.2.3\">65</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.4.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.4.3.1\">Flan-T5-large</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.1.4.3.2\">Query</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.4.3.3\">59</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.1.5.4\">\n<td class=\"ltx_td ltx_border_b ltx_border_r\" id=\"S3.T2.1.5.4.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T2.1.5.4.2\">Probe</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.1.5.4.3\">68</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Accuracy of T5 and Flan-T5.</figcaption>\n</figure>",
|
| 93 |
+
"capture": "Table 2: Accuracy of T5 and Flan-T5."
|
| 94 |
+
},
|
| 95 |
+
"3": {
|
| 96 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A1.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"A1.T3.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T3.1.1.1.1.1\">Prompt</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T3.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T3.1.2.1.1\">{sentence1}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.3.2.1\">{sentence2}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.4.3.1\">Does the word \"{word}\" mean the same thing in the above two sentences?</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.5.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.5.4.1\">Answer:[MASK]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.6.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T3.1.6.5.1\">Sentence 1: {sentence1}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.7.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.7.6.1\">Sentence 2: {sentence2}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.8.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.8.7.1\">Does {word} mean the same thing in these two sentences?</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.9.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.9.8.1\">Answer:[MASK]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.10.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T3.1.10.9.1\">Here is one sentence: {sentence1}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.11.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.11.10.1\">Here is another sentence: {sentence2}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.12.11\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.12.11.1\">Does the term {word} mean the same thing in both these sentences?</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.13.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.13.12.1\">Answer:[MASK]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.14.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T3.1.14.13.1\">In these two sentences (1) {sentence1} (2) {sentence2},</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.15.14\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.15.14.1\">does the word {word} mean the same thing?</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.16.15\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.16.15.1\">Answer:[MASK]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.17.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T3.1.17.16.1\">Does the word \"{word}\" have the same meaning in the following two sentences?</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.18.17\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.18.17.1\">{sentence1}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.19.18\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.19.18.1\">{sentence2}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.20.19\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.20.19.1\">Answer:[MASK]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.21.20\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T3.1.21.20.1\">Is the word \"{word}\" used in the same way in the following two sentences?</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.22.21\">\n<td 
class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.22.21.1\">{sentence1}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.23.22\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.23.22.1\">{sentence2}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.24.23\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.24.23.1\">Answer:[MASK]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.25.24\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T3.1.25.24.1\">Does the word \"{word}\" have the same definition in the next two sentences?</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.26.25\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.26.25.1\">{sentence1}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.27.26\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.27.26.1\">{sentence2}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.28.27\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.28.27.1\">Answer:[MASK]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.29.28\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T3.1.29.28.1\">Is {word} used to mean the same thing in the next two sentences?</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.30.29\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.30.29.1\">{sentence1}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.31.30\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.31.30.1\">{sentence2}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.32.31\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.32.31.1\">Answer:[MASK]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.33.32\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T3.1.33.32.1\">Does \"{word}\" mean the same thing in these two sentences?</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.34.33\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.34.33.1\">{sentence1}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.35.34\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.35.34.1\">{sentence2}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.36.35\">\n<td class=\"ltx_td ltx_align_left\" id=\"A1.T3.1.36.35.1\">Answer:[MASK]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.37.36\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A1.T3.1.37.36.1\">Does the word \"{word}\" mean the same thing in \"{sentence1}\" and \"{sentence2}\"?</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T3.1.38.37\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"A1.T3.1.38.37.1\">Answer:[MASK]</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Prompts for WIC.</figcaption>\n</figure>",
|
| 97 |
+
"capture": "Table 3: Prompts for WIC."
|
| 98 |
+
},
|
| 99 |
+
"4": {
|
| 100 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A2.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A2.T4.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A2.T4.4.5.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"A2.T4.4.5.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T4.4.5.1.1.1\">Prompt</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A2.T4.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A2.T4.2.2.2\">{} is to {} as:</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.4.6.1\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T4.4.6.1.1\">A) {} is to {}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.4.7.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T4.4.7.2.1\">B) {} is to {}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.4.8.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T4.4.8.3.1\">C) {} is to {}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.4.9.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T4.4.9.4.1\">D) {} is to {}</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.4.10.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T4.4.10.5.1\">Answer:[MASK]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A2.T4.4.4.2\">Which of the following pairs has the most similar relation with {, }?</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.4.11.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T4.4.11.6.1\">A) {, }</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.4.12.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T4.4.12.7.1\">B) {, }</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.4.13.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T4.4.13.8.1\">C) {, }</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.4.14.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T4.4.14.9.1\">D) {, }</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T4.4.15.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"A2.T4.4.15.10.1\">Answer:[MASK]</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Prompts for Analogy question.</figcaption>\n</figure>",
|
| 101 |
+
"capture": "Table 4: Prompts for Analogy question."
|
| 102 |
+
},
|
| 103 |
+
"5": {
|
| 104 |
+
"table_html": "<figure class=\"ltx_table\" id=\"A3.T5\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A3.T5.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A3.T5.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"A3.T5.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T5.1.1.1.1.1\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"A3.T5.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T5.1.1.1.2.1\">Sentences</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"A3.T5.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T5.1.1.1.3.1\">Tokens</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"A3.T5.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T5.1.1.1.4.1\">Entities</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A3.T5.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"A3.T5.1.2.1.1\">Train</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A3.T5.1.2.1.2\">14,041</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A3.T5.1.2.1.3\">203,621</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A3.T5.1.2.1.4\">23,499</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T5.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"A3.T5.1.3.2.1\">Dev</th>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T5.1.3.2.2\">3,250</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T5.1.3.2.3\">51,362</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A3.T5.1.3.2.4\">5,942</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T5.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"A3.T5.1.4.3.1\">Test</th>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"A3.T5.1.4.3.2\">3,453</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"A3.T5.1.4.3.3\">46,435</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"A3.T5.1.4.3.4\">5,648</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>CoNLL2003 Statistics.</figcaption>\n</figure>",
|
| 105 |
+
"capture": "Table 5: CoNLL2003 Statistics."
|
| 106 |
+
}
|
| 107 |
+
},
|
| 108 |
+
"image_paths": {
|
| 109 |
+
"1(a)": {
|
| 110 |
+
"figure_path": "2409.13972v1_figure_1(a).png",
|
| 111 |
+
"caption": "Figure 1: Model confidence and Accuracy comparison on WiC datasets.",
|
| 112 |
+
"url": "http://arxiv.org/html/2409.13972v1/extracted/5869498/tbl/embedding/bert-base-cased_wic_probe_test.png"
|
| 113 |
+
},
|
| 114 |
+
"1(b)": {
|
| 115 |
+
"figure_path": "2409.13972v1_figure_1(b).png",
|
| 116 |
+
"caption": "Figure 1: Model confidence and Accuracy comparison on WiC datasets.",
|
| 117 |
+
"url": "http://arxiv.org/html/2409.13972v1/extracted/5869498/tbl/embedding/gpt2_wic_probe_test.png"
|
| 118 |
+
},
|
| 119 |
+
"1(c)": {
|
| 120 |
+
"figure_path": "2409.13972v1_figure_1(c).png",
|
| 121 |
+
"caption": "Figure 1: Model confidence and Accuracy comparison on WiC datasets.",
|
| 122 |
+
"url": "http://arxiv.org/html/2409.13972v1/extracted/5869498/tbl/embedding/google-t5_t5-large_wic_probe_test.png"
|
| 123 |
+
},
|
| 124 |
+
"1(d)": {
|
| 125 |
+
"figure_path": "2409.13972v1_figure_1(d).png",
|
| 126 |
+
"caption": "Figure 1: Model confidence and Accuracy comparison on WiC datasets.",
|
| 127 |
+
"url": "http://arxiv.org/html/2409.13972v1/extracted/5869498/tbl/embedding/bert-base-cased_wic_query_test.png"
|
| 128 |
+
},
|
| 129 |
+
"1(e)": {
|
| 130 |
+
"figure_path": "2409.13972v1_figure_1(e).png",
|
| 131 |
+
"caption": "Figure 1: Model confidence and Accuracy comparison on WiC datasets.",
|
| 132 |
+
"url": "http://arxiv.org/html/2409.13972v1/extracted/5869498/tbl/embedding/gpt2_wic_query_test.png"
|
| 133 |
+
},
|
| 134 |
+
"1(f)": {
|
| 135 |
+
"figure_path": "2409.13972v1_figure_1(f).png",
|
| 136 |
+
"caption": "Figure 1: Model confidence and Accuracy comparison on WiC datasets.",
|
| 137 |
+
"url": "http://arxiv.org/html/2409.13972v1/extracted/5869498/tbl/embedding/google-t5_t5-large_wic_query_test.png"
|
| 138 |
+
}
|
| 139 |
+
},
|
| 140 |
+
"validation": true,
|
| 141 |
+
"references": [
|
| 142 |
+
{
|
| 143 |
+
"1": {
|
| 144 |
+
"title": "The internal state of an LLM knows when it\u2019s lying.",
|
| 145 |
+
"author": "Amos Azaria and Tom Mitchell. 2023.",
|
| 146 |
+
"venue": "In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 967\u2013976, Singapore. Association for Computational Linguistics.",
|
| 147 |
+
"url": "https://doi.org/10.18653/v1/2023.findings-emnlp.68"
|
| 148 |
+
}
|
| 149 |
+
},
|
| 150 |
+
{
|
| 151 |
+
"2": {
|
| 152 |
+
"title": "Language models are few-shot learners.",
|
| 153 |
+
"author": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.",
|
| 154 |
+
"venue": "In Advances in Neural Information Processing Systems, volume 33, pages 1877\u20131901. Curran Associates, Inc.",
|
| 155 |
+
"url": "https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf"
|
| 156 |
+
}
|
| 157 |
+
},
|
| 158 |
+
{
|
| 159 |
+
"3": {
|
| 160 |
+
"title": "Can ai assistants know what they don\u2019t know?",
|
| 161 |
+
"author": "Qinyuan Cheng, Tianxiang Sun, Xiangyang Liu, Wenwei Zhang, Zhangyue Yin, Shimin Li, Linyang Li, Kai Chen, and Xipeng Qiu. 2024.",
|
| 162 |
+
"venue": "arXiv preprint arXiv:2401.13275.",
|
| 163 |
+
"url": null
|
| 164 |
+
}
|
| 165 |
+
},
|
| 166 |
+
{
|
| 167 |
+
"4": {
|
| 168 |
+
"title": "Results of the WNUT2017 shared task on novel and emerging entity recognition.",
|
| 169 |
+
"author": "Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. 2017.",
|
| 170 |
+
"venue": "In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 140\u2013147, Copenhagen, Denmark. Association for Computational Linguistics.",
|
| 171 |
+
"url": "https://doi.org/10.18653/v1/W17-4418"
|
| 172 |
+
}
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"5": {
|
| 176 |
+
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding.",
|
| 177 |
+
"author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019a.",
|
| 178 |
+
"venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171\u20134186, Minneapolis, Minnesota. Association for Computational Linguistics.",
|
| 179 |
+
"url": "https://doi.org/10.18653/v1/N19-1423"
|
| 180 |
+
}
|
| 181 |
+
},
|
| 182 |
+
{
|
| 183 |
+
"6": {
|
| 184 |
+
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding.",
|
| 185 |
+
"author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019b.",
|
| 186 |
+
"venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171\u20134186, Minneapolis, Minnesota. Association for Computational Linguistics.",
|
| 187 |
+
"url": "https://doi.org/10.18653/v1/N19-1423"
|
| 188 |
+
}
|
| 189 |
+
},
|
| 190 |
+
{
|
| 191 |
+
"7": {
|
| 192 |
+
"title": "Truthful ai: Developing and governing ai that does not lie.",
|
| 193 |
+
"author": "Owain Evans, Owen Cotton-Barratt, Lukas Finnveden, Adam Bales, Avital Balwit, Peter Wills, Luca Righetti, and William Saunders. 2021.",
|
| 194 |
+
"venue": "arXiv preprint arXiv:2110.06674.",
|
| 195 |
+
"url": null
|
| 196 |
+
}
|
| 197 |
+
},
|
| 198 |
+
{
|
| 199 |
+
"8": {
|
| 200 |
+
"title": "Placing search in context: The concept revisited.",
|
| 201 |
+
"author": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001.",
|
| 202 |
+
"venue": "In Proceedings of the 10th international conference on World Wide Web, pages 406\u2013414.",
|
| 203 |
+
"url": null
|
| 204 |
+
}
|
| 205 |
+
},
|
| 206 |
+
{
|
| 207 |
+
"9": {
|
| 208 |
+
"title": "Analogy-based detection of morphological and semantic relations with word embeddings: What works and what doesn\u2019t.",
|
| 209 |
+
"author": "Anna Gladkova, Aleksandr Drozd, and Satoshi Matsuoka.",
|
| 210 |
+
"venue": "In Proceedings of the NAACL-HLT SRW, address = San Diego, California, June 12-17, 2016, publisher = ACL, year = 2016, pages = 47-54 doi = 10.18653/v1/N16-2002, url = https://www.aclweb.org/anthology/N/N16/N16-2002.pdf,.",
|
| 211 |
+
"url": null
|
| 212 |
+
}
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"10": {
|
| 216 |
+
"title": "On calibration of modern neural networks.",
|
| 217 |
+
"author": "Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017.",
|
| 218 |
+
"venue": "In International conference on machine learning, pages 1321\u20131330. PMLR.",
|
| 219 |
+
"url": null
|
| 220 |
+
}
|
| 221 |
+
},
|
| 222 |
+
{
|
| 223 |
+
"11": {
|
| 224 |
+
"title": "Prompting is not a substitute for probability measurements in large language models.",
|
| 225 |
+
"author": "Jennifer Hu and Roger Levy. 2023.",
|
| 226 |
+
"venue": "In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5040\u20135060, Singapore. Association for Computational Linguistics.",
|
| 227 |
+
"url": "https://doi.org/10.18653/v1/2023.emnlp-main.306"
|
| 228 |
+
}
|
| 229 |
+
},
|
| 230 |
+
{
|
| 231 |
+
"12": {
|
| 232 |
+
"title": "Language models (mostly) know what they know.",
|
| 233 |
+
"author": "Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022.",
|
| 234 |
+
"venue": "arXiv preprint arXiv:2207.05221.",
|
| 235 |
+
"url": null
|
| 236 |
+
}
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"13": {
|
| 240 |
+
"title": "Prompt waywardness: The curious case of discretized interpretation of continuous prompts.",
|
| 241 |
+
"author": "Daniel Khashabi, Xinxi Lyu, Sewon Min, Lianhui Qin, Kyle Richardson, Sean Welleck, Hannaneh Hajishirzi, Tushar Khot, Ashish Sabharwal, Sameer Singh, and Yejin Choi. 2022.",
|
| 242 |
+
"venue": "In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3631\u20133643, Seattle, United States. Association for Computational Linguistics.",
|
| 243 |
+
"url": "https://doi.org/10.18653/v1/2022.naacl-main.266"
|
| 244 |
+
}
|
| 245 |
+
},
|
| 246 |
+
{
|
| 247 |
+
"14": {
|
| 248 |
+
"title": "Cognitive dissonance: Why do language model outputs disagree with internal representations of truthfulness?",
|
| 249 |
+
"author": "Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, and Jacob Andreas. 2023.",
|
| 250 |
+
"venue": "In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4791\u20134797, Singapore. Association for Computational Linguistics.",
|
| 251 |
+
"url": "https://doi.org/10.18653/v1/2023.emnlp-main.291"
|
| 252 |
+
}
|
| 253 |
+
},
|
| 254 |
+
{
|
| 255 |
+
"15": {
|
| 256 |
+
"title": "Better word representations with recursive neural networks for morphology.",
|
| 257 |
+
"author": "Thang Luong, Richard Socher, and Christopher Manning. 2013.",
|
| 258 |
+
"venue": "In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 104\u2013113, Sofia, Bulgaria. Association for Computational Linguistics.",
|
| 259 |
+
"url": "https://aclanthology.org/W13-3512"
|
| 260 |
+
}
|
| 261 |
+
},
|
| 262 |
+
{
|
| 263 |
+
"16": {
|
| 264 |
+
"title": "The geometry of truth: Emergent linear structure in large language model representations of true/false datasets.",
|
| 265 |
+
"author": "Samuel Marks and Max Tegmark. 2024.",
|
| 266 |
+
"venue": "Preprint, arXiv:2310.06824.",
|
| 267 |
+
"url": "https://arxiv.org/abs/2310.06824"
|
| 268 |
+
}
|
| 269 |
+
},
|
| 270 |
+
{
|
| 271 |
+
"17": {
|
| 272 |
+
"title": "Reducing Conversational Agents\u2019 Overconfidence Through Linguistic Calibration.",
|
| 273 |
+
"author": "Sabrina J. Mielke, Arthur Szlam, Emily Dinan, and Y-Lan Boureau. 2022.",
|
| 274 |
+
"venue": "Transactions of the Association for Computational Linguistics, 10:857\u2013872.",
|
| 275 |
+
"url": "https://doi.org/10.1162/tacl_a_00494"
|
| 276 |
+
}
|
| 277 |
+
},
|
| 278 |
+
{
|
| 279 |
+
"18": {
|
| 280 |
+
"title": "Distributed representations of words and phrases and their compositionality.",
|
| 281 |
+
"author": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013.",
|
| 282 |
+
"venue": "In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc.",
|
| 283 |
+
"url": "https://proceedings.neurips.cc/paper_files/paper/2013/file/9aa42b31882ec039965f3c4923ce901b-Paper.pdf"
|
| 284 |
+
}
|
| 285 |
+
},
|
| 286 |
+
{
|
| 287 |
+
"19": {
|
| 288 |
+
"title": "Revisiting the calibration of modern neural networks.",
|
| 289 |
+
"author": "Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, and Mario Lucic. 2021.",
|
| 290 |
+
"venue": "Advances in Neural Information Processing Systems, 34:15682\u201315694.",
|
| 291 |
+
"url": null
|
| 292 |
+
}
|
| 293 |
+
},
|
| 294 |
+
{
|
| 295 |
+
"20": {
|
| 296 |
+
"title": "GloVe: Global vectors for word representation.",
|
| 297 |
+
"author": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014.",
|
| 298 |
+
"venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532\u20131543, Doha, Qatar. Association for Computational Linguistics.",
|
| 299 |
+
"url": "https://doi.org/10.3115/v1/D14-1162"
|
| 300 |
+
}
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"21": {
|
| 304 |
+
"title": "Deep contextualized word representations.",
|
| 305 |
+
"author": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018.",
|
| 306 |
+
"venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227\u20132237, New Orleans, Louisiana. Association for Computational Linguistics.",
|
| 307 |
+
"url": "https://doi.org/10.18653/v1/N18-1202"
|
| 308 |
+
}
|
| 309 |
+
},
|
| 310 |
+
{
|
| 311 |
+
"22": {
|
| 312 |
+
"title": "WiC: the word-in-context dataset for evaluating context-sensitive meaning representations.",
|
| 313 |
+
"author": "Mohammad Taher Pilehvar and Jose Camacho-Collados. 2019.",
|
| 314 |
+
"venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1267\u20131273, Minneapolis, Minnesota. Association for Computational Linguistics.",
|
| 315 |
+
"url": "https://doi.org/10.18653/v1/N19-1128"
|
| 316 |
+
}
|
| 317 |
+
},
|
| 318 |
+
{
|
| 319 |
+
"23": {
|
| 320 |
+
"title": "Language models are unsupervised multitask learners.",
|
| 321 |
+
"author": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019.",
|
| 322 |
+
"venue": null,
|
| 323 |
+
"url": null
|
| 324 |
+
}
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"24": {
|
| 328 |
+
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer.",
|
| 329 |
+
"author": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2023.",
|
| 330 |
+
"venue": "Preprint, arXiv:1910.10683.",
|
| 331 |
+
"url": "https://arxiv.org/abs/1910.10683"
|
| 332 |
+
}
|
| 333 |
+
},
|
| 334 |
+
{
|
| 335 |
+
"25": {
|
| 336 |
+
"title": "Sentence-BERT: Sentence embeddings using Siamese BERT-networks.",
|
| 337 |
+
"author": "Nils Reimers and Iryna Gurevych. 2019.",
|
| 338 |
+
"venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982\u20133992, Hong Kong, China. Association for Computational Linguistics.",
|
| 339 |
+
"url": "https://doi.org/10.18653/v1/D19-1410"
|
| 340 |
+
}
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"26": {
|
| 344 |
+
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition.",
|
| 345 |
+
"author": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003.",
|
| 346 |
+
"venue": "In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142\u2013147.",
|
| 347 |
+
"url": "https://aclanthology.org/W03-0419"
|
| 348 |
+
}
|
| 349 |
+
},
|
| 350 |
+
{
|
| 351 |
+
"27": {
|
| 352 |
+
"title": "BERT is to NLP what AlexNet is to CV: Can pre-trained language models identify analogies?",
|
| 353 |
+
"author": "Asahi Ushio, Luis Espinosa Anke, Steven Schockaert, and Jose Camacho-Collados. 2021.",
|
| 354 |
+
"venue": "In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3609\u20133624, Online. Association for Computational Linguistics.",
|
| 355 |
+
"url": "https://doi.org/10.18653/v1/2021.acl-long.280"
|
| 356 |
+
}
|
| 357 |
+
},
|
| 358 |
+
{
|
| 359 |
+
"28": {
|
| 360 |
+
"title": "GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model.",
|
| 361 |
+
"author": "Ben Wang and Aran Komatsuzaki. 2021.",
|
| 362 |
+
"venue": "https://github.com/kingoflolz/mesh-transformer-jax.",
|
| 363 |
+
"url": null
|
| 364 |
+
}
|
| 365 |
+
},
|
| 366 |
+
{
|
| 367 |
+
"29": {
|
| 368 |
+
"title": "Gpt-ner: Named entity recognition via large language models.",
|
| 369 |
+
"author": "Shuhe Wang, Xiaofei Sun, Xiaoya Li, Rongbin Ouyang, Fei Wu, Tianwei Zhang, Jiwei Li, and Guoyin Wang. 2023a.",
|
| 370 |
+
"venue": "Preprint, arXiv:2304.10428.",
|
| 371 |
+
"url": "https://arxiv.org/abs/2304.10428"
|
| 372 |
+
}
|
| 373 |
+
},
|
| 374 |
+
{
|
| 375 |
+
"30": {
|
| 376 |
+
"title": "Knowledge editing for large language models: A survey.",
|
| 377 |
+
"author": "Song Wang, Yaochen Zhu, Haochen Liu, Zaiyi Zheng, Chen Chen, and Jundong Li. 2023b.",
|
| 378 |
+
"venue": "Preprint, arXiv:2310.16218.",
|
| 379 |
+
"url": "https://arxiv.org/abs/2310.16218"
|
| 380 |
+
}
|
| 381 |
+
},
|
| 382 |
+
{
|
| 383 |
+
"31": {
|
| 384 |
+
"title": "Are language models worse than humans at following prompts? it\u2019s complicated.",
|
| 385 |
+
"author": "Albert Webson, Alyssa Loo, Qinan Yu, and Ellie Pavlick. 2023.",
|
| 386 |
+
"venue": "In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7662\u20137686, Singapore. Association for Computational Linguistics.",
|
| 387 |
+
"url": "https://doi.org/10.18653/v1/2023.findings-emnlp.514"
|
| 388 |
+
}
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"32": {
|
| 392 |
+
"title": "Finetuned language models are zero-shot learners.",
|
| 393 |
+
"author": "Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022.",
|
| 394 |
+
"venue": "In International Conference on Learning Representations.",
|
| 395 |
+
"url": "https://openreview.net/forum?id=gEZrGCozdqR"
|
| 396 |
+
}
|
| 397 |
+
},
|
| 398 |
+
{
|
| 399 |
+
"33": {
|
| 400 |
+
"title": "Empirical study of zero-shot NER with ChatGPT.",
|
| 401 |
+
"author": "Tingyu Xie, Qi Li, Jian Zhang, Yan Zhang, Zuozhu Liu, and Hongwei Wang. 2023.",
|
| 402 |
+
"venue": "In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7935\u20137956, Singapore. Association for Computational Linguistics.",
|
| 403 |
+
"url": "https://doi.org/10.18653/v1/2023.emnlp-main.493"
|
| 404 |
+
}
|
| 405 |
+
},
|
| 406 |
+
{
|
| 407 |
+
"34": {
|
| 408 |
+
"title": "Do large language models know what they don\u2019t know?",
|
| 409 |
+
"author": "Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, and Xuanjing Huang. 2023.",
|
| 410 |
+
"venue": "In Findings of the Association for Computational Linguistics: ACL 2023, pages 8653\u20138665, Toronto, Canada. Association for Computational Linguistics.",
|
| 411 |
+
"url": "https://doi.org/10.18653/v1/2023.findings-acl.551"
|
| 412 |
+
}
|
| 413 |
+
}
|
| 414 |
+
],
|
| 415 |
+
"url": "http://arxiv.org/html/2409.13972v1"
|
| 416 |
+
}
|
20240921/2409.13975v1.json
ADDED
|
@@ -0,0 +1,163 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "ProTEA: Programmable Transformer Encoder Acceleration on FPGA",
|
| 3 |
+
"abstract": "Transformer neural networks (TNN) have been widely utilized on a diverse range of applications, including natural language processing (NLP), machine translation, and computer vision (CV). Their widespread adoption has been primarily driven by the exceptional performance of their multi-head self-attention block used to extract key features from sequential data. The multi-head self-attention block is followed by feedforward neural networks, which play a crucial role in introducing non-linearity to assist the model in learning complex patterns. Despite the popularity of TNNs, there has been limited numbers of hardware accelerators targeting these two critical blocks. Most prior works have concentrated on sparse architectures that are not flexible for popular TNN variants. This paper introduces ProTEA, a runtime programmable accelerator tailored for the dense computations of most of state-of-the-art transformer encoders. ProTEA is designed to reduce latency by maximizing parallelism. We introduce an efficient tiling of large matrices that can distribute memory and computing resources across different hardware components within the FPGA. We provide run time evaluations of ProTEA on a Xilinx Alveo U55C high-performance data center accelerator card. Experimental results demonstrate that ProTEA can host a wide range of popular transformer networks and achieve near optimal performance with a tile size of 64 in the multi-head self-attention block and 6 in the feedforward networks block when configured with 8 parallel attention heads, 12 layers, and an embedding dimension of 768 on the U55C. Comparative results are provided showing ProTEA is 2.5 faster than an NVIDIA Titan XP GPU. Results also show that it achieves 1.3 \u2013 2.8 speed up compared with current state-of-the-art custom designed FPGA accelerators.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "In recent years, transformer neural networks have become widely utilized on a diverse range of applications including natural language processing (NLP) [1 ###reference_b1###, 2 ###reference_b2###], neural machine translation [3 ###reference_b3###], and image processing [4 ###reference_b4###]. They are becoming favored over traditional recurrent neural network (RNN) and long short-term memory (LSTM) models for NLP tasks, and convolutional neural networks (CNN) for CV tasks. Their popularity is being driven by their ability to enable high computational parallelism for both the training and inference steps. Their natural exposure of higher levels of parallelism makes them well-suited for acceleration on hardware such as GPUs and FPGAs. There exist many transformer-based models such as full transformers containing both encoder and decoder [2 ###reference_b2###], BERT [5 ###reference_b5###], RoBERTa [6 ###reference_b6###], Swin Transformers [7 ###reference_b7###], structBERT [8 ###reference_b8###] etc. These models incorporate two notable features: a multi-headed attention (MHA) mechanism and feedforward neural networks (FFN) that distinguishes them from traditional CNNs, RNNs, and LSTMs.\nThese MHA and FFN mechanisms are computationally expensive due to intensive matrix-matrix multiplications and complex data flows [9 ###reference_b9###]. They account for a significant portion of runtime in many existing TNNs [10 ###reference_b10###]. Unfortunately, executing TNNs is inefficient on general-purpose platforms such as GPUs and CPUs because of their high power consumption, low computational efficiency, underutilized memory bandwidth, and significant compilation overheads[11 ###reference_b11###].\nIn addition to GPUs, FPGAs have become popular commercial off the shelf components used to accelerate DNNs. FPGAs offer the ability to exploit high level of parallelism to provide low run time inference latencies with efficient power consumption [12 ###reference_b12###, 13 ###reference_b13###]. Many studies have investigated how to increase the parallelization of CNNs, LSTMs, Graph Convolutional Networks [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###] on FPGAs to enhance performance. Recently, TNNs have been successfully deployed on FPGAs and application-specific integrated circuit (ASIC) hardware accelerators[18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###]. Most implementations compress the model by using different weight pruning strategies, and reduce latency by incorporating sparse matrices. Thus, they use a specialized sparse architecture specific to each application. However, different applications require different sparsity patterns, necessitating the redesign of the hardware architecture for optimal results. This comes at the cost of time-consuming synthesis, and requires skills in digital design and computer architecture as well as detailed knowledge of each target logic family. Therefore, there is a need for a versatile accelerator capable of efficiently managing dense matrix computations across a range of TNN applications.\nThe study in [18 ###reference_b18###] uses logic resources to implement a systolic array for parallelism, which can lead to underutilization of digital signal processing (DSP) units that are capable of high-speed computation at higher frequencies. DSP utilization also depends on the implementation method. 
For instance, many accelerators [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###] employ high-level synthesis (HLS) tools, while others use hardware description language (HDL) [24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###] for design. Although HLS requires less implementation time compared to HDL, writing efficient HLS code that effectively manages specific FPGA resources, such as DSPs, for optimal performance remains challenging [15 ###reference_b15###].\nThe analysis in [27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###] demonstrated that MHA and FFN occupy major portions of the memory and they have the highest computational demands. Since on-chip memory of FPGAs typically does not exceed 36MB and off-chip memory bandwidth is sometimes limited, matrices must be partitioned into tiles. However, designing an optimal partitioning scheme for MHA and FFN that aligns effectively with the architecture presents a significant challenge.\nIn this paper, HLS tool was used to design ProTEA, a programmable accelerator for transformer encoders. The code of the design written in HLS was optimized to increase the parallel computations by the DSPs. ProTEA incorporates efficient tiling for both the attention mechanism and linear transformations. It ensures enhanced parallel computations and communication so that the transformer encoding can be accelerated as much as possible.\nThe contributions of this paper are:\nA novel accelerator architecture for transformer encoders that maximizes DSP utilization to enhance parallel processing and achieve low latency.\nAn efficient tiling strategy for weight matrices in both the multi-head attention layer and the feedforward neural network layer, enabling the accommodation of large models within on-chip memory.\nA parameterized HLS code that allows for design-time adjustments of parameters in the HLS tool.\nA runtime programmable feature enabling dynamic adjustment of parameters in software, facilitating the evaluation of different models without the need for hardware re-synthesis."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Background",
|
| 15 |
+
"text": "Transformers consist of several fundamental components, as depicted in Fig. 1 ###reference_###. An input sequence of tokens is first converted into embeddings. The positional encoder adds positional information to these embeddings, enabling the model to account for the order of tokens in a sequence. This encoder generates vectors that provide context based on each word\u2019s position in a sentence. These vectors are then linearly transformed into three tensors: Q (queries), K (keys), and V (values) by multiplying the embedding matrix with three distinct weight matrices. The encoder block processes these tensors, transforming them into a higher-level representation that captures essential information. This transformation is crucial for accurately capturing features and contextual relationships within the input sequence. The encoder architecture is composed of two primary sub-layers: (1) the self-attention mechanism, and (2) the position-wise feed-forward network.\nThe self-attention mechanism allows the model to simultaneously evaluate different parts of an input sequence, capturing long-range relationships by calculating attention scores and using multi-head projections for various input representations. This capability enables the model to effectively learn complex patterns, dependencies, and relationships. The position-wise feed-forward network (FFN), similar to a multilayer perceptron (MLP), applies linear transformations independently to each position in the input sequence. This network performs two linear transformations, primarily involving matrix-vector multiplication. The first transformation includes activation functions such as the Rectified Linear Unit (ReLU) or Gaussian Error Linear Unit (GeLU), while the second transformation does not.\nAdditionally, each sub-layer incorporates a residual connection combined with layer normalization (LN), addressing the vanishing gradient problem during training. Residual connections and LN layers are added after each MHA and FFN layer, involving the addition of matrix elements and nonlinear functions.\n###figure_1### The decoder block, depicted in Fig. 1 ###reference_###, is tasked with generating the output sequence using the encoded representations provided by the encoder. Similar to the encoder, the decoder comprises a stack of N identical layers. Each layer in the decoder includes three sub-layers: (1) the Masked Attention Mechanism, which is similar to the encoder\u2019s self-attention but incorporates a masking feature to prevent the output from depending on future outputs; (2) an attention layer that focuses on the encoder\u2019s output, allowing the decoder to highlight relevant parts of the input sequence for each output element; and (3) a position-wise feed-forward network.\n###figure_2### As shown in Fig 2 ###reference_###, the scaled dot-product attention in each head is a vital component of the multi-head attention layer. The attention weights are calculated by taking the dot product of the Q and K matrices and then scaling the result by the square root of the second dimension of the K matrix. This scaling is crucial to prevent the dot products from becoming too large, which helps stabilize gradients during training. The scaled dot products are then passed through the softmax function to compute the attention weights. These weights are used to perform a weighted sum of the value vectors. 
The final output is the projection of the concatenated sequences from all heads.\nThe output of MHA can be represented as Equations 1 & 2. The input sequence X is linearly mapped into the Q, K, and V matrices using weights and biases. The scaling parameter is the second dimension of the Q and K matrices, the embedding dimension is a hyperparameter, h is the number of heads, and \u2018i\u2019 is the index over the attention heads."
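Note: the bodies of Equations 1 & 2 did not survive extraction into this JSON. For reference, a sketch of the standard multi-head attention formulation they correspond to; the notation d_k, W_i^Q, W_i^K, W_i^V, W^O is the usual one from the transformer literature, not necessarily the exact symbols used in the paper:
\[
\mathrm{head}_i \;=\; \mathrm{softmax}\!\left(\frac{(X W_i^{Q})\,(X W_i^{K})^{\top}}{\sqrt{d_k}}\right) (X W_i^{V}), \qquad i = 1, \dots, h
\]
\[
\mathrm{MHA}(X) \;=\; \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_h)\, W^{O}, \qquad d_k = d_m / h
\]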
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Related work",
|
| 21 |
+
"text": "Various FPGA and ASIC accelerators have been designed for TNNs. The ASIC design in [19 ###reference_b19###] leveraged parallelism and specialized datapaths to achieve significant gains in performance and energy efficiency. Another ASIC, ELSA [10 ###reference_b10###], employed specialized approximation algorithms to reduce computational demands. The SpAtten [31 ###reference_b31###] ASIC utilized sparsity and quantization to decrease computations and memory access. Additionally, the hardware-software co-design framework Sanger [9 ###reference_b9###] facilitated dynamic sparsity through a reconfigurable ASIC architecture. Despite these advancements, these solutions primarily focus on accelerating sparse attention mechanisms and do not address the deployment of full transformer models. The FPGA accelerator proposed by Lu et al. [18 ###reference_b18###] is the first hardware architecture to accelerate both the MHA and FFN layers of the transformer. However, their implementation was done using HDL for a single attention head. A shared computing architecture is implemented in [32 ###reference_b32###], where a parallel computing array is shared between MHA and FFNs for a CNN application. A novel structural pruning method was proposed by [33 ###reference_b33###] and the associated accelerator on FPGA was designed to reduce memory footprint. Peng et al. [21 ###reference_b21###] explored column-balanced block-wise pruning for transformers and designed an FPGA accelerator for optimized block-wise matrix multiplication. An algorithm hardware framework [28 ###reference_b28###] utilizes latency and accuracy constraints to determine the optimal sparsity ratio and select an appropriate FPGA platform. The energy-efficient acceleration framework FTRANS [29 ###reference_b29###] features an improved block-circulant matrix method for algorithm-level sparsity, along with a custom-designed accelerator tailored for this approach. Wojcicki et al.[23 ###reference_b23###] deployed a small TNN model on FPGA using HLS for experiments at the Large Hadron Collider. All of the existing hardware architectures are designed for a specific TNN and a specific sparsity pattern. They lack the flexibility to reconfigure the computing structure for different applications during runtime. EFA-Trans [25 ###reference_b25###] is compatible with dense and sparse computing patterns, but it would need resynthesis of the hardware to switch between two options. Furthermore, none of them explored which tile size and what utilization DSPs could achieve optimum parallelism."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "IV Accelerator Architecture",
|
| 27 |
+
"text": "The core of the accelerator is designed in C language on Vitis HLS 2022.2.1 tool. C simulation verifies the correctness of the algorithm, while C/RTL co-simulation ensures the functionality of the synthesized hardware. This section describes the high-level synthesis design technique that generates an optimized architecture utilizing most of the DSPs in the computation engines, ensuring high parallelism. The overall structure of the accelerator contains two main processing modules - the multihead attention (MHA) module and the feedforward network (FFN) module, which are shown in Fig. 3 ###reference_### and Fig. 4 ###reference_### respectively. The overall system was designed in Vivado 2022.1.2 design suite. It contains a custom IP block for the accelerator, which is exported from HLS. The inputs and weights are fetched from off-chip high-bandwidth memory (HBM) using AXI4 master interfaces [34 ###reference_b34###] when the load instruction from the accelerator controller is received according to demand. The accelerator receives control signals from the processor through an AXI-lite slave interface [35 ###reference_b35###]. Each hyperparameters of TNN can be programmed during runtime up to a maximum value by MicroBlaze (B) softcore processor [36 ###reference_b36###].\n###figure_3###"
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "4.1",
|
| 31 |
+
"parent_section_id": "4",
|
| 32 |
+
"section_name": "IV-A Attention Module",
|
| 33 |
+
"text": "The attention module (Fig. 3 ###reference_###) comprises three computation engines (CE), labeled as , , and based on their outputs. The number of these engines is determined by the number of attention heads (h). Each engine features an array of processing elements (PE), where each PE includes a DSP48 for performing multiplication and accumulation (MAC) operations. The quantity of PEs denoted as \u2018t\u2019 is influenced by the unrolling factor of the inner loop and the initiation interval of the pipelined outer loop. Since the data access patterns and computational needs vary across different engines, each has separate function definition in HLS. This ensures that the synthesized RTL modules of the engines contain distinct PE arrays, enabling individual optimization. Input data and weights are stored in multiple BRAMs/LUTRAMs to support parallel access.\nEach PE operates independently, equipped with its own memories, controller, and computing units. In HLS, the weights (, , ) for generating the query (Q), key (K), and value (V) matrices are defined as separate two-dimensional arrays of size (). Here, represents the tile size in the attention module. It is the dimension of the sub-matrices into which the larger weight matrices are partitioned. The number of heads, tile size, and array partitioning directives in HLS determine how these arrays are divided to create multiple two-port BRAMs. To address the limited ports of BRAMs, array partitioning and data loading are optimized to ensure that data needed simultaneously by a DSP is stored in separate BRAMs. The Q, K, and V matrices, sized (), are stored in intermediate buffers. Here, SL stands for sequence length."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "4.1.1",
|
| 37 |
+
"parent_section_id": "4.1",
|
| 38 |
+
"section_name": "IV-A1 QKVCE engine",
|
| 39 |
+
"text": "engine generates the query, key, and value matrices. This engine contains the , , buffers, and input () buffers from which data is accessed in parallel by parallel DSP units. The arrays used in this engine are divided into subarrays using our tiling technique to fit into on-chip memories. The number of loop iterations in the engine is determined by , resulting in a total of () tiles or iterations. During each iteration, distinct data is loaded into the , , , and buffers. Computations then commence in the PEs, while biases for the Q, K, and V matrices are simultaneously loaded into registers from off-chip memory. These biases are subsequently added to the Q, K, and V matrices. Algorithm 1 ###reference_### illustrates the computations of this engine, where the second loop (line 6) is pipelined, resulting in the full unrolling of the innermost loop (line 8) and generating () PEs."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4.1.2",
|
| 43 |
+
"parent_section_id": "4.1",
|
| 44 |
+
"section_name": "IV-A2 QKCE engine",
|
| 45 |
+
"text": "The engine performs matrix-matrix multiplication between the Q and K matrices. Since these matrices are relatively small, they are not tiled. Algorithm 2 ###reference_### outlines the operations performed.\nThe innermost loop (line 6) is fully unrolled, resulting in () PEs for this engine. The engine generates a matrix (S) of attention weights, which is stored in either BRAM or registers. These values are then passed to the softmax function. The softmax function, implemented in HLS, utilizes LUTs and flip-flops (FFs) to compute the result."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.1.3",
|
| 49 |
+
"parent_section_id": "4.1",
|
| 50 |
+
"section_name": "IV-A3 SVCE engine",
|
| 51 |
+
"text": "The output matrix (S) from the softmax operation is passed to the engine (Algorithm 3 ###reference_###), where it undergoes matrix-matrix multiplication with the value (V) matrix. In Algorithm 3 ###reference_###, the innermost loop (line 6) is fully unrolled, resulting in (SL) PEs. The output from this engine is termed the attention score."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.2",
|
| 55 |
+
"parent_section_id": "4",
|
| 56 |
+
"section_name": "IV-B Feedforward Network Module",
|
| 57 |
+
"text": "There are three CEs, denoted as , , and in FFN to perform the operations of feedforward networks of different dimensions (Fig. 4 ###reference_###). The definitions of the functions representing the CEs have different dimensions of arrays for the inputs and outputs in HLS. These arrays are converted into BRAMs/LUTRAMs after synthesis. The number of computations inside each engine is different, which is why each has a separate function in HLS. They contain a different number of processing elements (PE) after synthesis because of different unrolling factors of the innermost loop. The weights are stored in a two-dimensional array () of size () in HLS, where is tile size in FFN. and are followed by layer normalization (LN) modules. Algorithm 4 ###reference_### describes the general coding approach for an FFN engine."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.2.1",
|
| 61 |
+
"parent_section_id": "4.2",
|
| 62 |
+
"section_name": "IV-B1 FFN1CE engine",
|
| 63 |
+
"text": "engine performs the first linear transformation on the attention scores. The arrays used by the PEs are tiled along both dimensions. Thus, this engine is accessed times to finish the complete operation. The second for loop of the HLS code is pipelined causing the innermost for loop to be fully unrolled. This generates PEs which equals to ."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2.2",
|
| 67 |
+
"parent_section_id": "4.2",
|
| 68 |
+
"section_name": "IV-B2 FFN2CE engine",
|
| 69 |
+
"text": "engine performs second linear transformation on the normalized outputs of engine. The arrays used by the PEs are tiled along both dimensions. Thus, this engine is accessed times to finish the complete operation. This engine also contains PEs which equals to , because the trip count of the innermost loop is and it is fully unrolled."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.2.3",
|
| 73 |
+
"parent_section_id": "4.2",
|
| 74 |
+
"section_name": "IV-B3 FFN3CE engine",
|
| 75 |
+
"text": "engine performs final linear transformation on the normalized outputs of engine. The arrays used by the PEs are tiled along both dimensions. Thus, this engine is accessed times to finish the complete operation. The complete unroll of the innermost loop generates PEs in it, which equals to .\n###figure_4###"
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.3",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "IV-C Tiling Technique",
|
| 81 |
+
"text": "Since transformer models are typically large, tiling is used to manage the utilization of on-chip memory and computing units effectively. It ensures that the HLS tool can efficiently partition arrays and pipeline or unroll loops to minimize latency while keeping compilation time short. Figure 5 ###reference_### illustrates our distinctive tiling strategy for the MHA module. The weight matrices are divided into tiles, enabling BRAMs to be loaded with partial data fetched from off-chip memory. Tiling is applied only along the second dimension (columns) of the matrix because the first dimension (rows) is already reduced by the number of heads. Consequently, each matrix is loaded () times. The input buffers for each attention head are defined as a two-dimensional array of size (SL ), and tiling is similarly applied along the column of the matrix, resulting in () loads. During each iteration, data for one tile is loaded initially. The PEs then compute on this data, storing the results in intermediate buffers, which are accumulated with results from previous iterations. Ultimately, the final output is the cumulative sum of the results computed across all tiles.\n###figure_5### The FFNs that follow the attention layer are the most time- and resource-intensive components. The weight matrices for the FFN are defined as two-dimensional arrays with dimensions . These matrices are tiled along both dimensions (rows and columns), requiring two loops to iteratively load each tile. The first FFN module is reused times because both loops iterate times. The second and third FFN modules are reused times, reflecting the iteration counts of either or . Figure 6 ###reference_### illustrates our specific tiling strategy for the FFN. Results are first accumulated along the columns, followed by accumulation along the rows for all tiles.\n###figure_6###"
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.4",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "IV-D Runtime Configurable Capability",
|
| 87 |
+
"text": "The runtime-programmable parameters such as the number of attention heads, number of layers, embedding dimension, and sequence length can be sent to ProTEA via software running on the B processor. TNN models are trained using the PyTorch framework, and the resulting models should be saved as \u2019.pth\u2019 files. These files are then processed by a Python interpreter to extract key parameters such as the number of attention heads, layers, embedding dimension, and sequence length. While these parameters will vary across applications, ProTEA does not require resynthesis for each model; only minor software modifications are necessary. The software, developed in C++ using the Xilinx SDK tool, utilizes the extracted data to generate instructions and control signals. These signals guide the processor in activating the relevant parts of the accelerator hardware."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.5",
|
| 91 |
+
"parent_section_id": "4",
|
| 92 |
+
"section_name": "IV-E Tile Size Determination",
|
| 93 |
+
"text": "In ProTEA, the programmable parameters can be adjusted at runtime, whereas the tile size must be set before synthesis, as it cannot be modified without resynthesizing the entire hardware. The graph in Fig. 7 ###reference_### illustrates how variations in and impact system frequency (MHz) and latency (normalized to the minimum value). The number of tiles in MHA () was varied from 6 to 48, and for each MHA tile count, the number of tiles in FFN () ranged from 2 to 6. The results indicate that the optimal configuration for achieving the highest frequency (blue color) and lowest latency (green color) was 12 tiles in MHA and 6 tiles in FFN. This setup achieved a maximum frequency of 200 MHz, allowing ProTEA to execute all transformer neural network models discussed in Section V ###reference_###. Moreover, experiments showed that of 64 and of 128 are optimal for HLS, allowing for efficient array partitioning within a reasonable compilation time (approximately 36 hours) for a state-of-the-art (SOTA) transformer encoder.\n###figure_7###"
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5",
|
| 97 |
+
"parent_section_id": null,
|
| 98 |
+
"section_name": "Evaluation and Results",
|
| 99 |
+
"text": "Table I ###reference_### presents the runtime programmability, resource utilization, and performance metrics of ProTEA. The reported latency reflects the computation time, accounting for the overlap of data loading and computation. The synthesis was conducted with fixed tile sizes of = 64 and = 128, as these values are set before synthesis and cannot be altered afterward. Data was quantized to 8-bit fixed-point format; while this might result in accuracy loss depending on the application, it was not a primary focus. For applications requiring a larger bit width, the design can be easily modified in the HLS code, which will impact both resource utilization and latency. The accelerator\u2019s design parameters, including the embedding dimension (), number of heads (h), number of layers (N), and sequence length (SL), were initially configured with fixed values \u2014 768, 8, 12, and 64 respectively \u2014 based on a variant of BERT[5 ###reference_b5###] and the available FPGA resources. These parameters were then adjusted dynamically at runtime using B. This approach allows ProTEA to be synthesized once for a fixed set of resources while retaining the flexibility to adapt to various architectures as needed.\nTests 1, 2, and 3 demonstrate how varying the number of attention heads within the same accelerator dynamically impacts latency and throughput, with throughput defined as the number of giga operations per second (GOPS). On the Alveo U55C, the lowest latency of 279 ms and the highest GOPS of 53 were achieved with 8 parallel heads. Tests 4 and 5 explore the effect of varying the number of layers, showing that latency decreases and GOPS increases as the number of layers is reduced. Tests 6 and 7 examine the impact of embedding dimensions, with latency increasing and GOPS decreasing as the embedding dimension grows. Finally, Tests 8 and 9 investigate the effect of varying sequence length, where performance deteriorates as sequence length increases.\nResource utilization remained consistent across Tests 1 to 9, as the accelerator was synthesized only once with a fixed tile size, while other parameters were reconfigured at runtime through software. The design achieved high resource utilization, with 40% of DSPs and 76% of LUTs in use. Further DSP utilization was limited by the available LUTs, and the optimal number of parallel attention heads was determined to be 8 on the Alveo U55C to avoid overutilization by the engine.\nTable II ###reference_### compares the performance of our accelerator, ProTEA, with other FPGA-based accelerators. Each of these accelerators is custom-built for a specific TNN model, with some designed specifically for sparse computations. Among them, only EFA-Trans [25 ###reference_b25###] is flexible enough to toggle the sparse preprocessing unit, allowing it to switch between sparse and dense computations. Since ProTEA was synthesized only once with a fixed set of hardware resources and bit width, and was implemented on a different platform, we evaluated performance metrics like latency, throughput (GOPS), and normalized throughput (GOPS per DSP) [15 ###reference_b15###] for a fair comparison. ProTEA achieved 2.8 and 1.7 improvements in speed and GOPS, respectively, compared to the accelerators proposed by Wojcicki et al. [23 ###reference_b23###] and Qi et al. [28 ###reference_b28###]. The GOPS/DSP ratio was also increased by 3.46 and 2 compared to these accelerators. 
On the other hand, EFA-Trans, which appears to be custom-designed using HDL methods, resulted in more efficient hardware with a lower level of abstraction, making it 3.5 faster than ProTEA. Peng et al. [21 ###reference_b21###] applied a high sparsity of 90% to their model, achieving a 14 speedup over ProTEA. If the same sparsity level were applied to ProTEA, its latency would mathematically be reduced to 0.448 ms (calculated as ), making it 1.4 slower. FTRANS [29 ###reference_b29###] compressed the model by 93%. The same compression would make ProTEA 9.4 faster because its latency would be 0.31 ms (calculated as ). Moreover, ProTEA demonstrated 2 higher GOPS/DSP than FTRANS, indicating more efficient DSP usage.\nTable III ###reference_### compares ProTEA with various GPUs and CPUs operating at frequencies between 1.3 and 3.2 GHz. ProTEA was tested with different TNN models, as referenced in the second column. We could easily adjust the embedding dimensions, number of heads & layers, and sequence length in runtime to align with the architectures in the referenced studies without altering the hardware, thus, ensuring a fair comparison. ProTEA is 0.79 and 6.65 slower than the Intel I5-5257U CPU and JETSON TX2 GPU respectively for model #1 because this study [21 ###reference_b21###] applied a pruning technique. It is 2.5 faster than the NVIDIA TITAN XP GPU for model #2, and 16 faster than the NVIDIA TITAN XP GPU for model #4. These improvements are attributed to higher parallelism, despite ProTEA operating at a lower frequency and lacking sparsity. For model #3, ProTEA performed slower than the Intel I5-4460 CPU and NVIDIA RTX 3060 GPU, potentially due to the use of aggressive sparsity and omission of certain computations in the referenced work."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "6",
|
| 103 |
+
"parent_section_id": null,
|
| 104 |
+
"section_name": "VI Conclusion & Future Works",
|
| 105 |
+
"text": "In this research, we developed a flexible FPGA-based accelerator for the encoder layer of a transformer neural network (TNN) using a high-level synthesis (HLS) tool. The accelerator architecture exploits FPGA parallelism and the parallel nature of the encoder itself. On the Alveo U55C platform, resources such as BRAMs, DSPs, and LUTs were maximized to enhance parallelism and minimize latency. The accelerator supports runtime programmability, allowing it to adapt to various topologies without requiring re-synthesis. An efficient tiling technique and data loading method for weight matrices were implemented to accommodate large models in on-chip memory, while preventing the overutilization of computational resources. Experimental results show that our design outperforms some CPUs and GPUs in terms of speed and throughput despite operating at a lower frequency and lacking sparsity optimizations. Additionally, it achieved 1.3 to 2.8 speed up compared to the fastest state-of-the-art FPGA-based accelerators. Although this paper focuses solely on encoder layers, future work will extend the architecture to support both encoder and decoder layers of the transformer, using the same design principles."
|
| 106 |
+
}
|
| 107 |
+
],
|
| 108 |
+
"appendix": [],
|
| 109 |
+
"tables": {
|
| 110 |
+
"1": {
|
| 111 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Overall Results for Our Accelerator.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.1\" style=\"width:346.9pt;height:76.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-100.1pt,96.6pt) scale(0.634183479211427,0.28452719229239) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.1.1\">Test no.</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.2.1\">Sequence</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.3.1\">Embedding</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.4.1\">Number</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.5.1\">Number</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.6.1\">Data</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.7\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.7.1\">DSPs</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.8\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.8.1\">LUTs</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.9\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.9.1\">FFs</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.10\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.10.1\">Latency</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_l ltx_border_t\" id=\"S4.T1.1.1.1.1.11\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.11.1\">GOPS</span> \u2005\u2005 <span class=\"ltx_ERROR undefined\" id=\"S4.T1.1.1.1.1.11.2\">\\bigstrut</span>[t]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.2.2.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.2.2.1.1\">Length</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S4.T1.1.1.2.2.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.2.2.2.1\">Dimension</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.2.2.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.2.2.3.1\">of Heads</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.2.2.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.2.2.4.1\">of Layers</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.2.2.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.2.2.5.1\">Format</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.2.2.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.2.2.6.1\">(ms)</span></td>\n<td class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_l\" id=\"S4.T1.1.1.2.2.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">\u2005\u2005<span class=\"ltx_ERROR undefined\" id=\"S4.T1.1.1.2.2.7.1\">\\bigstrut</span>[b]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.3.3\">\n<td class=\"ltx_td ltx_nopad_l ltx_nopad_r ltx_align_left\" colspan=\"11\" id=\"S4.T1.1.1.3.3.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_rule\" style=\"width:100%;height:2.0pt;background:black;display:inline-block;\">\u00a0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T1.1.1.4.4.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">#1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.2\" rowspan=\"3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.4.4.2.1\">64</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.3\" rowspan=\"3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.4.4.3.1\">768</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.5\" rowspan=\"3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.4.4.5.1\">12</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.6\" rowspan=\"3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.4.4.6.1\">8bit fixed</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.7\" rowspan=\"3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.4.4.7.1\">3612 (40%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.8\" rowspan=\"3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.4.4.8.1\">993107 (76%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.9\" rowspan=\"3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.4.4.9.1\">704115 (27%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.10\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">279</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.11\" 
style=\"padding-left:2.0pt;padding-right:2.0pt;\">53 <span class=\"ltx_ERROR undefined\" id=\"S4.T1.1.1.4.4.11.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.5.5.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">#2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.5.5.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.5.5.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">285</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.5.5.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">51 <span class=\"ltx_ERROR undefined\" id=\"S4.T1.1.1.5.5.4.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.6.6.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">#3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.6.6.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.6.6.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">295</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.6.6.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">49 <span class=\"ltx_ERROR undefined\" id=\"S4.T1.1.1.6.6.4.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.7.7\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_t\" colspan=\"11\" id=\"S4.T1.1.1.7.7.1\" style=\"padding-bottom:-10.00002pt;padding-left:2.0pt;padding-right:2.0pt;\">\u2005 <span class=\"ltx_ERROR undefined\" id=\"S4.T1.1.1.7.7.1.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.8.8.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">#4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.8.8.2\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.8.8.2.1\">64</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.8.8.3\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.8.8.3.1\">768</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.8.8.4\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.8.8.4.1\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.8.8.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.8.8.6\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.8.8.6.1\">8bit fixed</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.8.8.7\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.8.8.7.1\">3612 (40%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.8.8.8\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" 
id=\"S4.T1.1.1.8.8.8.1\">993107 (76%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.8.8.9\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.8.8.9.1\">704115 (27%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.8.8.10\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">186</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.8.8.11\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">80 <span class=\"ltx_ERROR undefined\" id=\"S4.T1.1.1.8.8.11.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.9.9.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">#5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.9.9.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.9.9.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">93</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.9.9.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">159 <span class=\"ltx_ERROR undefined\" id=\"S4.T1.1.1.9.9.4.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.10.10\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_t\" colspan=\"11\" id=\"S4.T1.1.1.10.10.1\" style=\"padding-bottom:-10.00002pt;padding-left:2.0pt;padding-right:2.0pt;\">\u2005 <span class=\"ltx_ERROR undefined\" id=\"S4.T1.1.1.10.10.1.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.11.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.11.11.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">#6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.11.11.2\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.11.11.2.1\">64</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.11.11.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">512</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.11.11.4\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.11.11.4.1\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.11.11.5\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.11.11.5.1\">12</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.11.11.6\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.11.11.6.1\">8bit fixed</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.11.11.7\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.11.11.7.1\">3612 (40%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.11.11.8\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.11.11.8.1\">993107 (76%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.11.11.9\" rowspan=\"2\" 
style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.11.11.9.1\">704115 (27%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.11.11.10\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">186</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.11.11.11\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">36 <span class=\"ltx_ERROR undefined\" id=\"S4.T1.1.1.11.11.11.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.12.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.12.12.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">#7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.12.12.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">256</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.12.12.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">95</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.12.12.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">18 <span class=\"ltx_ERROR undefined\" id=\"S4.T1.1.1.12.12.4.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.13.13\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_t\" colspan=\"11\" id=\"S4.T1.1.1.13.13.1\" style=\"padding-bottom:-10.00002pt;padding-left:2.0pt;padding-right:2.0pt;\">\u2005 <span class=\"ltx_ERROR undefined\" id=\"S4.T1.1.1.13.13.1.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.14.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.14.14.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">#8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.14.14.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">128</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.14.14.3\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.14.14.3.1\">768</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.14.14.4\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.14.14.4.1\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.14.14.5\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.14.14.5.1\">12</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.14.14.6\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.14.14.6.1\">8bit fixed</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.14.14.7\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.14.14.7.1\">3612 (40%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.14.14.8\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.1.1.14.14.8.1\">993107 (76%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.14.14.9\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span 
class=\"ltx_text\" id=\"S4.T1.1.1.14.14.9.1\">704115 (27%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.14.14.10\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">560</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.14.14.11\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">54 <span class=\"ltx_ERROR undefined\" id=\"S4.T1.1.1.14.14.11.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.15.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.15.15.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">#9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.15.15.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.15.15.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">165</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.15.15.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">44 <span class=\"ltx_ERROR undefined\" id=\"S4.T1.1.1.15.15.4.1\">\\bigstrut</span>\n</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 112 |
+
"capture": "TABLE I: Overall Results for Our Accelerator."
|
| 113 |
+
},
|
| 114 |
+
"2": {
|
| 115 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Comparison with FPGA Accelerators.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.1\" style=\"width:260.2pt;height:79.7pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-136.1pt,104.2pt) scale(0.488640952489061,0.276624308692084) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.2\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.2.1\">Accelerator</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.3\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.3.1\">Precision</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.4\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.4.1\">FPGA</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.5\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.5.1\">\u00a0DSP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.6.1\">Latency</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.7\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.7.1\">GOPS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.1.1\">(GOPS/DSP)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.8\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.8.1\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.9\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.9.1\">Sparsity</span> <span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.1.9.2\">\\bigstrut</span>[t]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.2.1.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.2.1.1.1\">(ms)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.2.1.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.2.1.2.1\">1000</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.2.1.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">\n<span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.2.1.3.1\">\\bigstrut</span>[b]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l 
ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3.2.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13975v1#bib.bib21\" title=\"\">21</a>]</cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3.2.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">\u2013</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3.2.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Alveo U200</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3.2.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">3368</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3.2.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">0.32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3.2.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">555</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3.2.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">164</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3.2.8\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.1.1.3.2.8.1\">HLS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3.2.9\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">90% <span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.3.2.9.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.4.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.4.3.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.4.3.1.1\">ProTEA</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.4.3.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Fix8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.4.3.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Alveo U55C</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.4.3.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">3612</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.4.3.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">4.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.4.3.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.4.3.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">22</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.4.3.8\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">0% <span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.4.3.8.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.5.4\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_t\" colspan=\"9\" id=\"S4.T2.1.1.5.4.1\" style=\"padding-bottom:-13.00005pt;padding-left:2.0pt;padding-right:2.0pt;\">\u2005 <span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.5.4.1.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.6.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.6.5.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><cite class=\"ltx_cite 
ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13975v1#bib.bib23\" title=\"\">23</a>]</cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.6.5.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Float32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.6.5.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Alveo 250</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.6.5.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">4351</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.6.5.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">1.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.6.5.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">0.0006</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.6.5.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">0.00013</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.6.5.8\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.1.1.6.5.8.1\">HLS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.6.5.9\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.1.1.6.5.9.1\">0%</span> <span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.6.5.9.2\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.7.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.7.6.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.7.6.1.1\">ProTEA</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.7.6.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Fix8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.7.6.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Alveo U55C</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.7.6.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">3612</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.7.6.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">0.425</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.7.6.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">0.0017</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.7.6.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">0.00045</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.7.6.8\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.7.6.8.1\">\\bigstrut</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.8.7\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_t\" colspan=\"9\" id=\"S4.T2.1.1.8.7.1\" style=\"padding-bottom:-13.00005pt;padding-left:2.0pt;padding-right:2.0pt;\">\u2005 <span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.8.7.1.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.9.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.9.8.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2409.13975v1#bib.bib25\" title=\"\">25</a>]</cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.9.8.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Int8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.9.8.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">ZCU102</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.9.8.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">1024</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.9.8.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">1.47</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.9.8.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">279</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.9.8.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">272</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.9.8.8\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">HDL</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.9.8.9\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.1.1.9.8.9.1\">0%</span> <span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.9.8.9.2\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.10.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.10.9.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.10.9.1.1\">ProTEA</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.10.9.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Fix8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.10.9.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Alveo U55C</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.10.9.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">3612</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.10.9.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">5.18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.10.9.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">83</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.10.9.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.10.9.8\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">HLS</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.10.9.9\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.10.9.9.1\">\\bigstrut</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.11.10\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_t\" colspan=\"9\" id=\"S4.T2.1.1.11.10.1\" style=\"padding-bottom:-13.00005pt;padding-left:2.0pt;padding-right:2.0pt;\">\u2005 <span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.11.10.1.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.12.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.12.11.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2409.13975v1#bib.bib28\" title=\"\">28</a>]</cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.12.11.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">\u2013</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.12.11.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Alveo 200</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.12.11.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">4145</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.12.11.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">15.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.12.11.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">75.94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.12.11.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.12.11.8\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.1.1.12.11.8.1\">HLS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.12.11.9\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.1.1.12.11.9.1\">0%</span> <span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.12.11.9.2\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.13.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.13.12.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.13.12.1.1\">ProTEA</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.13.12.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Fix8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.13.12.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Alveo U55C</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.13.12.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">3612</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.13.12.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">9.12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.13.12.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">132</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.13.12.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">37</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.13.12.8\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.13.12.8.1\">\\bigstrut</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.14.13\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_t\" colspan=\"9\" id=\"S4.T2.1.1.14.13.1\" style=\"padding-bottom:-13.00005pt;padding-left:2.0pt;padding-right:2.0pt;\">\u2005 <span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.14.13.1.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.15.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.15.14.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2409.13975v1#bib.bib29\" title=\"\">29</a>]</cite></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.15.14.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Fix16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.15.14.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">VCU118</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.15.14.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">5647</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.15.14.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">2.94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.15.14.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.15.14.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.15.14.8\" rowspan=\"2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.1.1.15.14.8.1\">HLS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.15.14.9\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">93% <span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.15.14.9.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.16.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.16.15.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T2.1.1.16.15.1.1\">ProTEA</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.16.15.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Fix8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.16.15.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">Alveo U55C</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.16.15.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">3612</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.16.15.5\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">4.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.16.15.6\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.16.15.7\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">22</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.16.15.8\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">0% <span class=\"ltx_ERROR undefined\" id=\"S4.T2.1.1.16.15.8.1\">\\bigstrut</span>\n</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 116 |
+
"capture": "TABLE II: Comparison with FPGA Accelerators."
|
| 117 |
+
},
|
| 118 |
+
"3": {
|
| 119 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Cross-Platform Comparison</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T3.6\" style=\"width:433.6pt;height:56.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-17.3pt,97.5pt) scale(0.926014861091514,0.22581548539419) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T3.6.6\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.6.6.7.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.7.1.1\" style=\"padding:-0.5pt 2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.6.6.7.1.1.1\">TNNs</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.7.1.2\" style=\"padding:-0.5pt 2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.6.6.7.1.2.1\">Works</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.7.1.3\" style=\"padding:-0.5pt 2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.6.6.7.1.3.1\">Platform</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.7.1.4\" style=\"padding:-0.5pt 2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.6.6.7.1.4.1\">Frequency</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.7.1.5\" style=\"padding:-0.5pt 2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.6.6.7.1.5.1\">Latency (ms)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.7.1.6\" style=\"padding:-0.5pt 2.0pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.6.6.7.1.6.1\">Speed Up</span> <span class=\"ltx_ERROR undefined\" id=\"S5.T3.6.6.7.1.6.2\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.6.6.8.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.8.2.1\" rowspan=\"3\" style=\"padding:-0.5pt 2.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.6.6.8.2.1.1\">#1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.8.2.2\" rowspan=\"3\" style=\"padding:-0.5pt 2.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.6.6.8.2.2.1\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13975v1#bib.bib21\" title=\"\">21</a>]</cite></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.8.2.3\" style=\"padding:-0.5pt 2.0pt;\">INTEL I5-5257U CPU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.8.2.4\" style=\"padding:-0.5pt 2.0pt;\">2.7 GHz</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.8.2.5\" style=\"padding:-0.5pt 2.0pt;\">3.54 (Base)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.8.2.6\" style=\"padding:-0.5pt 2.0pt;\">1 <span class=\"ltx_ERROR undefined\" id=\"S5.T3.6.6.8.2.6.1\">\\bigstrut</span>[t]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.1.1.1.2\" style=\"padding:-0.5pt 2.0pt;\">JETSON TX2 GPU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.1.1.1.3\" style=\"padding:-0.5pt 2.0pt;\">1.3 GHz</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.1.1.1.4\" 
style=\"padding:-0.5pt 2.0pt;\">0.673</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.1.1.1.1\" style=\"padding:-0.5pt 2.0pt;\">5.3\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.2.2.2.2\" style=\"padding:-0.5pt 2.0pt;\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.2.2.2.2.1\">ProTEA</span> FPGA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.2.2.2.3\" style=\"padding:-0.5pt 2.0pt;\">0.2 GHz</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.2.2.2.4\" style=\"padding:-0.5pt 2.0pt;\">4.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.2.2.2.1\" style=\"padding:-0.5pt 2.0pt;\">0.79 <span class=\"ltx_ERROR undefined\" id=\"S5.T3.2.2.2.1.1\">\\bigstrut</span>[b]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.6.6.9.3\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_t\" colspan=\"6\" id=\"S5.T3.6.6.9.3.1\" style=\"padding-bottom:-13.00005pt;padding:-0.5pt 2.0pt;\">\u2005 <span class=\"ltx_ERROR undefined\" id=\"S5.T3.6.6.9.3.1.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.6.6.10.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.10.4.1\" rowspan=\"2\" style=\"padding:-0.5pt 2.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.6.6.10.4.1.1\">#2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.10.4.2\" rowspan=\"2\" style=\"padding:-0.5pt 2.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.6.6.10.4.2.1\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13975v1#bib.bib23\" title=\"\">23</a>]</cite></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.10.4.3\" style=\"padding:-0.5pt 2.0pt;\">NVIDIA TITAN XP GPU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.10.4.4\" style=\"padding:-0.5pt 2.0pt;\">1.4 GHz</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.10.4.5\" style=\"padding:-0.5pt 2.0pt;\">1.062 (Base)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.10.4.6\" style=\"padding:-0.5pt 2.0pt;\">1 <span class=\"ltx_ERROR undefined\" id=\"S5.T3.6.6.10.4.6.1\">\\bigstrut</span>[t]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.3.3.3.2\" style=\"padding:-0.5pt 2.0pt;\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.3.3.3.2.1\">ProTEA</span> FPGA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.3.3.3.3\" style=\"padding:-0.5pt 2.0pt;\">0.2 GHz</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.3.3.3.4\" style=\"padding:-0.5pt 2.0pt;\">0.425</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.3.3.3.1\" style=\"padding:-0.5pt 2.0pt;\">2.5 <span class=\"ltx_ERROR undefined\" id=\"S5.T3.3.3.3.1.1\">\\bigstrut</span>[b]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.6.6.11.5\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_t\" colspan=\"6\" id=\"S5.T3.6.6.11.5.1\" style=\"padding-bottom:-13.00005pt;padding:-0.5pt 2.0pt;\">\u2005 <span class=\"ltx_ERROR undefined\" id=\"S5.T3.6.6.11.5.1.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.6.6.12.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.12.6.1\" rowspan=\"3\" style=\"padding:-0.5pt 
2.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.6.6.12.6.1.1\">#3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.12.6.2\" rowspan=\"3\" style=\"padding:-0.5pt 2.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.6.6.12.6.2.1\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13975v1#bib.bib25\" title=\"\">25</a>]</cite></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.12.6.3\" style=\"padding:-0.5pt 2.0pt;\">INTEL I5-4460 CPU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.12.6.4\" style=\"padding:-0.5pt 2.0pt;\">3.2 GHz</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.12.6.5\" style=\"padding:-0.5pt 2.0pt;\">4.66 (Base)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.12.6.6\" style=\"padding:-0.5pt 2.0pt;\">1 <span class=\"ltx_ERROR undefined\" id=\"S5.T3.6.6.12.6.6.1\">\\bigstrut</span>[t]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.4.4.4.2\" style=\"padding:-0.5pt 2.0pt;\">NVIDIA RTX 3060 GPU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.4.4.4.3\" style=\"padding:-0.5pt 2.0pt;\">1.3 GHz</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.4.4.4.4\" style=\"padding:-0.5pt 2.0pt;\">0.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.4.4.4.1\" style=\"padding:-0.5pt 2.0pt;\">6.5\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.5.5.2\" style=\"padding:-0.5pt 2.0pt;\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.5.5.5.2.1\">ProTEA</span> FPGA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.5.5.3\" style=\"padding:-0.5pt 2.0pt;\">0.2 GHz</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.5.5.4\" style=\"padding:-0.5pt 2.0pt;\">5.18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.5.5.1\" style=\"padding:-0.5pt 2.0pt;\">0.89 <span class=\"ltx_ERROR undefined\" id=\"S5.T3.5.5.5.1.1\">\\bigstrut</span>[b]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.6.6.13.7\">\n<td class=\"ltx_td ltx_nopad_r ltx_align_right ltx_border_t\" colspan=\"6\" id=\"S5.T3.6.6.13.7.1\" style=\"padding-bottom:-13.00005pt;padding:-0.5pt 2.0pt;\">\u2005 <span class=\"ltx_ERROR undefined\" id=\"S5.T3.6.6.13.7.1.1\">\\bigstrut</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.6.6.14.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.14.8.1\" rowspan=\"2\" style=\"padding:-0.5pt 2.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.6.6.14.8.1.1\">#4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.14.8.2\" rowspan=\"2\" style=\"padding:-0.5pt 2.0pt;\"><span class=\"ltx_text\" id=\"S5.T3.6.6.14.8.2.1\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13975v1#bib.bib28\" title=\"\">28</a>]</cite></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.14.8.3\" style=\"padding:-0.5pt 2.0pt;\">NVIDIA TITAN XP GPU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.14.8.4\" style=\"padding:-0.5pt 2.0pt;\">1.4 GHz</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" 
id=\"S5.T3.6.6.14.8.5\" style=\"padding:-0.5pt 2.0pt;\">147 (Base)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.14.8.6\" style=\"padding:-0.5pt 2.0pt;\">1 <span class=\"ltx_ERROR undefined\" id=\"S5.T3.6.6.14.8.6.1\">\\bigstrut</span>[t]</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.6.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T3.6.6.6.2\" style=\"padding:-0.5pt 2.0pt;\">\n<span class=\"ltx_text ltx_font_italic\" id=\"S5.T3.6.6.6.2.1\">ProTEA</span> FPGA</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T3.6.6.6.3\" style=\"padding:-0.5pt 2.0pt;\">0.2 GHz</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T3.6.6.6.4\" style=\"padding:-0.5pt 2.0pt;\">9.12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T3.6.6.6.1\" style=\"padding:-0.5pt 2.0pt;\">16 <span class=\"ltx_ERROR undefined\" id=\"S5.T3.6.6.6.1.1\">\\bigstrut</span>[b]</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 120 |
+
"capture": "TABLE III: Cross-Platform Comparison"
|
| 121 |
+
}
|
| 122 |
+
},
|
| 123 |
+
"image_paths": {
|
| 124 |
+
"1": {
|
| 125 |
+
"figure_path": "2409.13975v1_figure_1.png",
|
| 126 |
+
"caption": "Figure 1: Transformer Architecture.",
|
| 127 |
+
"url": "http://arxiv.org/html/2409.13975v1/x1.png"
|
| 128 |
+
},
|
| 129 |
+
"2": {
|
| 130 |
+
"figure_path": "2409.13975v1_figure_2.png",
|
| 131 |
+
"caption": "Figure 2: Multihead Attention Layer.",
|
| 132 |
+
"url": "http://arxiv.org/html/2409.13975v1/x2.png"
|
| 133 |
+
},
|
| 134 |
+
"3": {
|
| 135 |
+
"figure_path": "2409.13975v1_figure_3.png",
|
| 136 |
+
"caption": "Figure 3: Computations of the Attention Module",
|
| 137 |
+
"url": "http://arxiv.org/html/2409.13975v1/x3.png"
|
| 138 |
+
},
|
| 139 |
+
"4": {
|
| 140 |
+
"figure_path": "2409.13975v1_figure_4.png",
|
| 141 |
+
"caption": "Figure 4: Computations of Feedforward Network.",
|
| 142 |
+
"url": "http://arxiv.org/html/2409.13975v1/x4.png"
|
| 143 |
+
},
|
| 144 |
+
"5": {
|
| 145 |
+
"figure_path": "2409.13975v1_figure_5.png",
|
| 146 |
+
"caption": "Figure 5: Tiling Technique in Multihead Attention Layer.",
|
| 147 |
+
"url": "http://arxiv.org/html/2409.13975v1/x5.png"
|
| 148 |
+
},
|
| 149 |
+
"6": {
|
| 150 |
+
"figure_path": "2409.13975v1_figure_6.png",
|
| 151 |
+
"caption": "Figure 6: Tiling Technique in FFN.",
|
| 152 |
+
"url": "http://arxiv.org/html/2409.13975v1/x6.png"
|
| 153 |
+
},
|
| 154 |
+
"7": {
|
| 155 |
+
"figure_path": "2409.13975v1_figure_7.png",
|
| 156 |
+
"caption": "Figure 7: Choosing the optimum tile size.",
|
| 157 |
+
"url": "http://arxiv.org/html/2409.13975v1/x7.png"
|
| 158 |
+
}
|
| 159 |
+
},
|
| 160 |
+
"validation": true,
|
| 161 |
+
"references": [],
|
| 162 |
+
"url": "http://arxiv.org/html/2409.13975v1"
|
| 163 |
+
}
|
20240921/2409.13980v1.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240921/2409.13982v1.json
ADDED
|
@@ -0,0 +1,300 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "CUS3D: CLIP-based Unsupervised 3D Segmentation via Object-level Denoise",
|
| 3 |
+
"abstract": "To ease the difficulty of acquiring annotation labels in 3D data, a common method is using unsupervised and open-vocabulary semantic segmentation, which leverage 2D CLIP semantic knowledge.\nIn this paper, unlike previous research that ignores the \u201cnoise\u201d raised during feature projection from 2D to 3D, we propose a novel distillation learning framework named CUS3D.\nIn our approach, an object-level denosing projection module is designed to screen out the \u201cnoise\u201d and ensure more accurate 3D feature.\nBased on the obtained features, a multimodal distillation learning module is designed to align the 3D feature with CLIP semantic feature space with object-centered constrains to achieve advanced unsupervised semantic segmentation.\nWe conduct comprehensive experiments in both unsupervised and open-vocabulary segmentation, and the results consistently showcase the superiority of our model in achieving advanced unsupervised segmentation results and its effectiveness in open-vocabulary segmentation.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "###figure_1### Point cloud semantic segmentation [1 ###reference_b1###, 2 ###reference_b2###] aiming to give a category label for each input point is a fundamental task of 3D visual understanding.\nDue to the challenges in acquiring and annotating point cloud data, there is a shortage of supervised segmentation labels for point clouds, which greatly limit the development of this field and downstream tasks.\nTo address this shortcoming, many researchers have carried out studies on unsupervised semantic segmentation which means training model without using annotated data.\nSeveral research works have used clustering [3 ###reference_b3###, 4 ###reference_b4###] to achieve unsupervised semantic segmentation.\nThis approach, although applicable to a wide range of scenarios, has limited accuracy and is hard to capture deep semantic context for downstream tasks.\nBenefiting from the development of large models such as CLIP [5 ###reference_b5###] which established complete semantic space on 2D image or even biomedical image [6 ###reference_b6###], many researchers [7 ###reference_b7###, 8 ###reference_b8###] aimed to achieve unsupervised point cloud semantic segmentation by aligning the 3D semantic space to the 2D semantic space of large models.\nThe framework of their approaches takes the RGB-D frames as input, and implement 2D semantic segmentation via large model like CLIP.\nPixel-wise features can be obtained through segmented masks, then the features is projected to 3D space using 2D-3D projection methods [9 ###reference_b9###] to obtain point-wise feature.\nThe semantic segmentation result can be obtained by calculating the the similarity between the point-wise features and the text embedding of categories.\nThese approaches leveraging knowledge of large models can achieve more accurate unsupervised semantic segmentation result and have impressive performance on open-vocabulary tasks.\nHowever, though their methods yielded promising results in unsupervised and open-vocabulary semantic segmentation, it is noteworthy that they did not consider the accuracy of the obtained 3D features.\nIn fact, these 3D features contain a lot of \u201cnoise\u201d, which affects the alignment to the semantic space of large model.\nThe \u201cnoise\u201d raised from two aspects.\n1) \u201cNoise\u201d is introduced by the mask-centered assignment of features in 2D stage.\n2) Fusion of features without screening introduces \u201cnoise\u201d in 3D stage.\nIn detail, the whole image is divided into patches [8 ###reference_b8###] or masks [7 ###reference_b7###] for feature exaction in 2D stage.\nAs shown in Figure 1 ###reference_### (Yellow), one pixel of the \u201cbag\u201d belongs to two different masks.\nThe big mask is selected as the final results and the CLIP exact its features as \u201cBed\u201d due to the large models pay more attention on the overall semantics.\nMasks are difficult to adaptively fit to the size of various objects introducing \u201cnoise\u201d.\nIn 3D stage, as shown in Figure 1 ###reference_### (Blue), the red point represent the same point in 3D space and appears in three different frames.\nThe features projected from the three frames is different, when fusing the feature without screening, the \u201cnoise\u201d is raised.\nShown in the figure, the point is incorrectly recognized as \u201cBad\u201d finally.\nWhen aligning the semantic space, these \u201cnoises\u201d affect the accuracy of the learned distributions, as illustrated in Figure 1 ###reference_### (Green), and further 
reduce segmentation accuracy.\nWe visualize the impact of \u201cnoise\u201d on the segmentation results in Figure 3 ###reference_### (D).\nHow to avoid these \u201cnoises\u201d is a worthy concern and can significantly improve the accuracy of segmentation.\nTo address these issues and align the semantic space more accurately, we propose a novel framework CUS3D for unsupervised and open-vocabulary semantic segmentation via object-level denoise.\nWe believe that pixels or points inside one object should have the same characteristics, the \u201cnoise\u201d can be suppressed by filtering and assigning features at object-level, but it is difficult to find an accurate object mask without supervision.\nTo eliminate these noises during projection, we firstly propose a Object-level Denoising Projection(ODP) module, obtaining category to pixels as well as point mask by efficiently clustering and voting strategies, to screen out the \u201cnoise\u201d raised in 2D stage and 3D stage and obtain more accurate features.\nThe obtained 3D features are discrete in semantic space, and distillation learning enables the model to learn the complete distribution from discrete sample points and further extends the use of the model.\nThus, we further design a 3D Multimodal Distillation Learning (MDL) module with object-level constraints, constraining the 2D and 3D semantic spaces to be as close as possible centered on the object and further screening out the effects of \u201cnoise\u201d.\nIn summary, we propose a novel framework called CUS3D to efficiently align the 3D semantic space to the CLIP feature space to achieve advanced unsupervised and open-vocabulary segmentation results, and our contributions in this paper are as follows:\n1) We proposed an object-level feature denoise module to ensure more accurate 3D features.\n2) We devised a multimodal distillation learning module using object-centered constrains to realize more effectively alignment of the 2D and 3D semantic space.\n3) We conducted detailed unsupervised and open-vocabulary segmentation experiments to prove the efficiency of our approach.\n###figure_2###"
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Method",
|
| 15 |
+
"text": "The overall framework of our proposed method is illustrated in Figure 2 ###reference_###, which consists of four main stages:\n1) 2D CLIP feature extraction (orange), extracting pixel-wise features;\n2) 3D feature aggregation (blue), obtaining 3D features;\n3) 3D student network (green), fitting the semantic space of CLIP;\n4) CLIP textual encoder (gray), extracting textual embeddings.\nThe first and the second stage belongs to the ODP module which denoising at object level to obtain accurate 3D features, while the third and forth stage belongs to the MDL module which fitting the CLIP\u2019s semantic space."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Object-level Denoising Projection",
|
| 21 |
+
"text": "This module containing two sub-modules are designed to filter the \u201cnoise\u201d in the feature at object-level to obtain more accurate 3D features, the two sub-models are described in detail in the following sections."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.1.1",
|
| 25 |
+
"parent_section_id": "2.1",
|
| 26 |
+
"section_name": "2.1.1 2D CLIP Feature Extraction",
|
| 27 |
+
"text": "We design a 2D feature extraction method to assign pixel-wise features.\nUnlike other mask-based methods [10 ###reference_b10###], we assign pixel features centered on the object, and design a 2D object-level filter screening out the \u201cnoise\u201d raised during feature excation.\nWe first use a MaskFormer [11 ###reference_b11###] to obtain candidate masks and a pixel-to-mask mapping matrix (), representing the probability that each pixel belongs to each mask,\nwhere is the number of pixels in the image and is the number of masks.\nThen, for each predicted mask, the corresponding CLIP visual features are extracted using the CLIP ViT encoder [5 ###reference_b5###].\nIf a probability value in is greater than the threshold , we assign the corresponding mask feature to this pixel.\nEach raw pixel feature is .\nFor each raw feature of each pixel, we calculate its category label and obtain a label set of this pixel.\nThe most frequent category label of the set is voted as the label of this pixel, to establish the connection between pixel and object.\nThen we filter out the CLIP feature that does not belong to this category.\nFinally, the reserved CLIP features undergo an average pooling to obtain the final pixel-wise feature ."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.1.2",
|
| 31 |
+
"parent_section_id": "2.1",
|
| 32 |
+
"section_name": "2.1.2 3D Feature Aggregation",
|
| 33 |
+
"text": "As shown in the blue part of Figure 2 ###reference_###, the pixel-wise features obtained from Section 2.1.1 ###reference_.SSS1### are projected to the 3D point clouds using method in 3DMV [9 ###reference_b9###] to obtain raw point features .\nTo reduce the \u201cnoise\u201d in the features, we conduct denoise by a 3D object-level filter.\nFirst, we calculate object masks for point clouds, a pretrained Pointclustering [3 ###reference_b3###] is used to obtain the offset of each point cloud to its cluster center, and construct the object mask according to which cluster the point cloud belongs to.\nThen, as shown in Figure 2 ###reference_### (3D Feature aggregation), we use the generated object mask to denoise and aggregate the raw point feature.\nFor each object mask, since the raw point feature is projected from the pixel-wise feature, it also has the CLIP semantic information;\nthus, we can measure the cosine similarity between its each raw point-wise feature and the CLIP text embedding, to assign a category label to each raw point-wise feature.\nAnd the most frequent category label among the raw point-wise features is selected as the label for that object mask.\nWe then screen out any raw point-wise features that do not match this label and each point cloud remains features.\nTo get the final point-wise feature , for each point,\nif , we perform average pooling on these features to obtain the final feature for that point;\nwhile if ,\nwe take the average of all retained features belonging to the same object mask with it and assign this average feature to that point.\nThis screening process can effectively eliminate noisy features at object level and obtain more robust and accurate point-wise features."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.2",
|
| 37 |
+
"parent_section_id": "2",
|
| 38 |
+
"section_name": "3D Multimodal Distillation Learning",
|
| 39 |
+
"text": "The distribution of 3D features obtained after projection is discrete, in order to align 2D CLIP semantic space more effectively and further eliminate the effects of \u201cnoise\u201d, we adapt a distillation learning network with object-centered constrains to align to the 2D CLIP semantic space.\nUsing the features (Teacher) obtained in Section 2.1.2 ###reference_.SSS2###, distillation learning can be performed to guide the 3D model to encode the features to fit the CLIP embedding space, thus further suppressing \u201cnoise\u201d.\nWe supervise the student network (green stage in Figure 2 ###reference_###) with the constrains of features and the labels.\n3D ResU-Net [12 ###reference_b12###] is chosen as the backbone network because it can efficiently extract point features and preserve shallow features.\nFor a scene with points, the 3D ResU-Net outputs a feature map of size , where each of the points has a corresponding 3D feature vector.\nAfter the backbone network is deployed, we utilize three fully connected layers (MLPs) as projection modules to project the point cloud features into the CLIP embedding space.\nSpecifically, the output feature map is fed into the projection layers to align the features with the CLIP semantic space."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "2.2.1",
|
| 43 |
+
"parent_section_id": "2.2",
|
| 44 |
+
"section_name": "2.2.1 Loss Function",
|
| 45 |
+
"text": "Relied solely on cosine similarity for supervision can lead to the students network more sensitive to \u201cnoise\u201d.\nUnlike previous work, we add object-centered constrain(label loss) for more effectually learning.\nThe feature loss function maintains semantic consistency between the output features and the target features.\nThe label loss function provides a soft-supervise signal for training, aligning the two distributions with object-centered constrains which can weak the effect of \u201cnoise\u201d and learn a rubost distribution.\nUsing both loss functions together, the model achieves better knowledge distillation performance.\nFeature Loss is cosine similarity loss to measure the similarity between target and output features, which is defined as follows:\nwhere and represent the features predicted by the network and the aggregated CLIP feature of the - point, respectively, where denotes the total number of point clouds.\nThis loss function constrains the semantic proximity between two features.\nLabel Loss employs the cross-entropy loss to assess the consistency between the predict results of two features (network output feature and CLIP feature) and is defined as follows:\nwhere represents the number of point clouds, and denotes the cross-entropy loss function.\n and are one-hot matrixes which represent the category prediction results of the two features corresponding to the - point.\nThe matrixes are obtained by cosine similarity between the textual embedding and the feature.\nLabel loss constrains the alignment of the two semantic spaces centered on the object category, enabling the network to further resist the effects of \u201cnoise\u201d and learn a valid semantic space.\nThe combined of the two constrains can enable the network to more accurately align CLIP semantic space."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "Experiment",
|
| 51 |
+
"text": ""
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.1",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "Dataset and Settings",
|
| 57 |
+
"text": "Dataset:\nScanNetV2 [13 ###reference_b13###] and S3DIS [14 ###reference_b14###] are both 3D indoor dataset. ScanNetV2 provide point cloud and RGB-D data of more than 1,500 scenes with objects in 20 categories.\nS3DIS provides point cloude and RGB-D data of 6 large areas, 272 rooms, and its object labeled in 13 classes.\nImplement Details:\nOur experiments were carried out using a GeForce RTX 3090 graphics card with 24GB RAM.\nOur network was distilled using CLIP features in the training set (without labels).\nWe employ an initial learning rate of 0.001 with cosine annealing learning decay.\nWe adapt the accuracy (Acc), mean IoU (mIoU) and harmonic IoU (hIoU) for the evaluation metrics."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.2",
|
| 61 |
+
"parent_section_id": "3",
|
| 62 |
+
"section_name": "Quantitative results",
|
| 63 |
+
"text": "###figure_3### Unsupervised Semantic Segmentation:\nWe conducted unsupervised segmentation experiments on both ScanNetV2 and S3DIS.\nIn both the feature projection process and the distillation study process, no labels were used.\nThe model was subsequently tested on ScannetV2\u2019s validation set and S3DIS.\nThe experimental results are detailed in Table 1 ###reference_###.\nWhile our method may not surpass existing supervised methods, it has attained state-of-the-art (SOTA) performance among unsupervised approaches on ScanNetV2 (57.4% vs. 54.2%).\nNotably, we show a huge improvement in accuracy over the other methods on ScanNetV2 (75.9% vs. 70.7%), which we attribute to pseudo label produced by the ODP module to guide the network to learn the boundaries of the object and improve the accuracy of semantic segmentation.\nOpen-vocabulary Semantic Segmentation:\nWe conducted open-vocabulary experiments in two ways on ScanNetV2 dataset.\nIn the first experiment, following the settings of CLIP-FO3D [19 ###reference_b19###], we split the labels into visible part and invisible part. We only train the model using visible part and then test the model on both visible and invisible labels, to see if our model can segment invisible objects.\nThe invisible labels are set to 6 and 10, respectively.\nThe results of these experiments are presented in Table 2 ###reference_###,\nand our method achieves SOTA in both mIoU and hIoU on Unseen-6 and Unseen-10,\nproving its robust open-vocabulary capability.\nIn the second experiment, as shown in Table 1 ###reference_### (w/o pretraining), we distillate our model on ScanNetV2 training set with 20 labels, and test our model on S3DIS with unseen labels and scenes. It is noticed that our model performs better than CLIP-FO3D (25.6% vs. 22.3%) in same settings.\nWe believe that our two modules weak the effect of the \u201cnoise\u201d in feature and achieved more effective and robust alignment from 3D feature to 2D CLIP semantic space, thus performing better on onpen-vocabulary semantic segmentation."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.3",
|
| 67 |
+
"parent_section_id": "3",
|
| 68 |
+
"section_name": "Ablation Studies",
|
| 69 |
+
"text": "In this section, we conduct ablation experiments on the S3DIS and ScanNetV2 datasets, projecting CLIP features for unsupvised segmentation networks distillation, and making segmentation predictions.\nWe first determine whether our loss function improves the model performance on two datasets.\nAs shown in Table 3 ###reference_###, the network encounters performance drop when only using label loss for distillation study,\nwhile without label loss and only retain feature loss, the performance of the model also drops.\nUsing both label loss and feature loss can achieve more accurate segmentation result on both S3DIS dataset (52.6% vs. 49.8%) and ScanNetV2 dataset (57.4% vs. 53.3%).\nThe label loss force the two distributions close centered on the object-level, with little attention to detail, and is therefore poorly effective on its own.\nThe feature loss can bring the two distributions as close as possible, but is susceptible to \u201cnoise\u201d.\nWhen the two loss functions are used together, the network is able to effectively eliminate \u201cnoise\u201d interference and learn a more robust distribution.\nThen to investigate the effectiveness of ODP module, we conduct ablation experiments on the sub-modules in ODP on ScanNetV2 dataset both on training set and validation set.\nAs indicated in Table 4 ###reference_###,\nthe four rows above show the mIoU and accuracy calculating bwteen ground truth and pseudo CLIP feature, while the following four rows show the mIoU and accuracy calculate between ground truth and student network\u2019s outputs.\nIt can be found that using student network achieves better semantic segmentation results than directly leveraging pseudo CLIP features (57.4% vs. 52.7%) due to that the student network can complement discrete distributions and further resist the effects of \u201cnoise\u201d.\nThen we investigate the efficiency of 2D feature extraction and 3D feature aggregation.\nUsing either 2D feature extraction or 3D feature aggregation module can eliminate the effects of \u201cnoise\u201d and improve the performance of the models, while combining both of them can achieve even greater improvement on training and validation set, regardless of whether the student network is used.\nAnd compared with 2D feature extraction, 3D feature aggregation has more impact on model performance."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "3.4",
|
| 73 |
+
"parent_section_id": "3",
|
| 74 |
+
"section_name": "Segmentation Visualization",
|
| 75 |
+
"text": "Figure 3 ###reference_### gives the visualization results of some experiments.\nSubfigure A shows the unsupervised semantic segmentation results on ScannetV2 comparing our method to OpenScene [7 ###reference_b7###].\nIt can be seen that our model perform better in detail of different objects.\nSubfigure B demonstrates our model\u2019s abilities in open-vocabulary segmentation.\nCompared to ground truth, our model can segment objects in unseen categories, such as computers, blackboards, etc.\nIn Subfigure C,\nusing the text below the images, our model can correctly focus on corresponding objects, and the attention results are visualized by heat maps.\nThis proves that our model is capable of exploring open-vocabulary 3D scenes by not only categories but also other object properties, such as colors, shapes, usages, materials, and so on.\nSubfigure D shows the visualization results of the ablation experiments performed on the ODP module.\nThis figure clearly visualizes the effect of \u201cnoise\u201d on the segmentation results, and Both the two sub-modules in ODP can weak the effect of the \u201cnoise\u201d, thus improving the accuracy of the 3D features, while using them both can achieve significantly better results."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4",
|
| 79 |
+
"parent_section_id": null,
|
| 80 |
+
"section_name": "Conclution",
|
| 81 |
+
"text": "This paper proposes CUS3D aiming to align 3D features and 2D CLIP semantic space effectually and realise advanced open-vocabulary semantic segmentation.\nOur experimental results consistently demonstrate that our approach outperforms in achieving superior unsupervised segmentation results and exhibits robust open world segmentation capabilities.\nSmall objects pose a challenge for our segmentation process, often resulting in inaccurate segmentation.\nThis occurs because the 2D feature extraction process overlooks the unique characteristics of small objects.\nWe plan to address this limitation in future work."
|
| 82 |
+
}
|
| 83 |
+
],
|
| 84 |
+
"appendix": [],
|
| 85 |
+
"tables": {
|
| 86 |
+
"1": {
|
| 87 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.2\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.2.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T1.2.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.1.1.1\">Model</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S3.T1.2.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.1.2.1\">ScanNet</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S3.T1.2.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.1.3.1\">S3DIS</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.2.1.1\">mIoU</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.2.2.1\">Acc</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.2.3.1\">mIoU</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.2.4.1\">Acc</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" colspan=\"5\" id=\"S3.T1.2.3.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.3.3.1.1\">Fully-supervised methods</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.2.4.4.1\">MinkowskiNet\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13982v1#bib.bib15\" title=\"\">15</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.4.2\">69.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.4.3\">77.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.4.4\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.4.5\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.5.5.1\">Mix3D\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13982v1#bib.bib16\" title=\"\">16</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.5.2\">73.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.5.3\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.5.4\">67.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.5.5\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.6.6.1\">Stratified Transformer\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13982v1#bib.bib17\" title=\"\">17</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.6.2\">74.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.6.3\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.6.4\">72.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.6.5\">78.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" colspan=\"5\" id=\"S3.T1.2.7.7.1\"><span class=\"ltx_text 
ltx_font_bold\" id=\"S3.T1.2.7.7.1.1\">Unsupervised methods</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.2.8.8.1\">Mseg\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13982v1#bib.bib18\" title=\"\">18</a>]</cite> Voting</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.8.8.2\">45.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.8.8.3\">54.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.8.8.4\">42.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.8.8.5\">51.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.9.9.1\">CLIP-FO3D\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13982v1#bib.bib19\" title=\"\">19</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.9.2\">30.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.9.3\">49.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.9.4\">22.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.9.5\">32.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.10.10.1\">OpenScene\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13982v1#bib.bib7\" title=\"\">7</a>]</cite> (LSeg)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.10.10.2\">54.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.10.10.3\">66.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.10.10.4\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.10.10.5\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.11.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.2.11.11.1\">OpenScene\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13982v1#bib.bib7\" title=\"\">7</a>]</cite> (OpenSeg)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.11.11.2\">47.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.11.11.3\">70.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.11.11.4\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.11.11.5\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.2.12.12.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.12.12.1.1\">CUS3D (Ours)</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.12.12.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.12.12.2.1\">57.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.12.12.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.12.12.3.1\">75.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.12.12.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.12.12.4.1\">52.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.12.12.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.12.12.5.1\">72.6</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.13.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S3.T1.2.13.13.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S3.T1.2.13.13.1.1\">w/o pretraining</span></th>\n<td 
class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.13.13.2\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.13.13.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.13.13.4\">25.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.13.13.5\">50.6</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.3.1.1\">Table 1</span>: </span>segmentation results on ScanNetV2\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13982v1#bib.bib13\" title=\"\">13</a>]</cite> (Validation Set) and S3DIS\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13982v1#bib.bib20\" title=\"\">20</a>]</cite>, our method achieves SOTA unsupervised segmentation performance.\n</figcaption>\n</figure>",
|
| 88 |
+
"capture": "Table 1: segmentation results on ScanNetV2\u00a0[13] (Validation Set) and S3DIS\u00a0[20], our method achieves SOTA unsupervised segmentation performance.\n"
|
| 89 |
+
},
|
| 90 |
+
"2": {
|
| 91 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T2.4.5.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T2.4.5.1.1\"><span class=\"ltx_text\" id=\"S3.T2.4.5.1.1.1\" style=\"font-size:90%;\">Setting</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S3.T2.4.5.1.2\"><span class=\"ltx_text\" id=\"S3.T2.4.5.1.2.1\" style=\"font-size:90%;\">Unseen-6</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"3\" id=\"S3.T2.4.5.1.3\"><span class=\"ltx_text\" id=\"S3.T2.4.5.1.3.1\" style=\"font-size:90%;\">Unseen-10</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.6.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T2.4.6.2.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.4.6.2.1.1\" style=\"font-size:90%;\">Metric</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S3.T2.4.6.2.2\"><span class=\"ltx_text\" id=\"S3.T2.4.6.2.2.1\" style=\"font-size:90%;\">mIoU</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T2.4.6.2.3\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.4.6.2.3.1\" style=\"font-size:90%;\">hIoU</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S3.T2.4.6.2.4\"><span class=\"ltx_text\" id=\"S3.T2.4.6.2.4.1\" style=\"font-size:90%;\">mIoU</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T2.4.6.2.5\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T2.4.6.2.5.1\" style=\"font-size:90%;\">hIoU</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T2.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T2.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T2.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T2.4.4.4\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.4.7.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T2.4.7.1.1\">\n<span class=\"ltx_text\" id=\"S3.T2.4.7.1.1.1\" style=\"font-size:90%;\">3DGenZ\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T2.4.7.1.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13982v1#bib.bib21\" title=\"\">21</a><span class=\"ltx_text\" id=\"S3.T2.4.7.1.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.4.7.1.2\"><span class=\"ltx_text\" id=\"S3.T2.4.7.1.2.1\" style=\"font-size:90%;\">31.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.4.7.1.3\"><span class=\"ltx_text\" id=\"S3.T2.4.7.1.3.1\" style=\"font-size:90%;\">4.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T2.4.7.1.4\"><span class=\"ltx_text\" id=\"S3.T2.4.7.1.4.1\" style=\"font-size:90%;\">8.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.4.7.1.5\"><span class=\"ltx_text\" id=\"S3.T2.4.7.1.5.1\" 
style=\"font-size:90%;\">30.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.4.7.1.6\"><span class=\"ltx_text\" id=\"S3.T2.4.7.1.6.1\" style=\"font-size:90%;\">1.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.4.7.1.7\"><span class=\"ltx_text\" id=\"S3.T2.4.7.1.7.1\" style=\"font-size:90%;\">2.7</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.8.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.4.8.2.1\">\n<span class=\"ltx_text\" id=\"S3.T2.4.8.2.1.1\" style=\"font-size:90%;\">TGP\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T2.4.8.2.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13982v1#bib.bib22\" title=\"\">22</a><span class=\"ltx_text\" id=\"S3.T2.4.8.2.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.4.8.2.2\"><span class=\"ltx_text\" id=\"S3.T2.4.8.2.2.1\" style=\"font-size:90%;\">55.2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.4.8.2.3\"><span class=\"ltx_text\" id=\"S3.T2.4.8.2.3.1\" style=\"font-size:90%;\">15.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.4.8.2.4\"><span class=\"ltx_text\" id=\"S3.T2.4.8.2.4.1\" style=\"font-size:90%;\">24.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.4.8.2.5\"><span class=\"ltx_text\" id=\"S3.T2.4.8.2.5.1\" style=\"font-size:90%;\">52.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.4.8.2.6\"><span class=\"ltx_text\" id=\"S3.T2.4.8.2.6.1\" style=\"font-size:90%;\">9.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.4.8.2.7\"><span class=\"ltx_text\" id=\"S3.T2.4.8.2.7.1\" style=\"font-size:90%;\">16.1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.9.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T2.4.9.3.1\">\n<span class=\"ltx_text\" id=\"S3.T2.4.9.3.1.1\" style=\"font-size:90%;\">CLIP-FO3D\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S3.T2.4.9.3.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2409.13982v1#bib.bib19\" title=\"\">19</a><span class=\"ltx_text\" id=\"S3.T2.4.9.3.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.4.9.3.2\"><span class=\"ltx_text\" id=\"S3.T2.4.9.3.2.1\" style=\"font-size:90%;\">67.3</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.4.9.3.3\"><span class=\"ltx_text\" id=\"S3.T2.4.9.3.3.1\" style=\"font-size:90%;\">50.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T2.4.9.3.4\"><span class=\"ltx_text\" id=\"S3.T2.4.9.3.4.1\" style=\"font-size:90%;\">57.9</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.4.9.3.5\"><span class=\"ltx_text\" id=\"S3.T2.4.9.3.5.1\" style=\"font-size:90%;\">67.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.4.9.3.6\"><span class=\"ltx_text\" id=\"S3.T2.4.9.3.6.1\" style=\"font-size:90%;\">40.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.4.9.3.7\"><span class=\"ltx_text\" id=\"S3.T2.4.9.3.7.1\" style=\"font-size:90%;\">50.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.10.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S3.T2.4.10.4.1\"><span class=\"ltx_text\" id=\"S3.T2.4.10.4.1.1\" style=\"font-size:90%;\">CUS3D (Ours)</span></th>\n<td 
class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.4.10.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.4.10.4.2.1\" style=\"font-size:90%;\">68.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.4.10.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.4.10.4.3.1\" style=\"font-size:90%;\">53.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T2.4.10.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.4.10.4.4.1\" style=\"font-size:90%;\">59.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.4.10.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.4.10.4.5.1\" style=\"font-size:90%;\">69.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.4.10.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.4.10.4.6.1\" style=\"font-size:90%;\">46.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.4.10.4.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.4.10.4.7.1\" style=\"font-size:90%;\">55.5</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.14.1.1\">Table 2</span>: </span>Open-vocabulary semantic segmentation on ScanNetV2.\n\u201cUnseen-i\u201d indicates that there are i classes that do not have labels during training.\n and represent the performance of seen and unseen classes, respectively.\n</figcaption>\n</figure>",
|
| 92 |
+
"capture": "Table 2: Open-vocabulary semantic segmentation on ScanNetV2.\n\u201cUnseen-i\u201d indicates that there are i classes that do not have labels during training.\n and represent the performance of seen and unseen classes, respectively.\n"
|
| 93 |
+
},
|
| 94 |
+
"3": {
|
| 95 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T3.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T3.2.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T3.2.1.1.1\">Experiment Settings</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S3.T3.2.1.1.2\">S3DIS</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"S3.T3.2.1.1.3\">ScanNetV2</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.2.1\">Feature Loss</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.2.2\">Label Loss</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.2.3\">mIoU</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.2.4\">Acc</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.2.5\">mIoU</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.2.2.2.6\">Acc</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T3.2.3.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T3.2.3.1.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T3.2.3.1.2\">\u2717</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T3.2.3.1.3\">49.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T3.2.3.1.4\">70.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T3.2.3.1.5\">53.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.2.3.1.6\">74.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.4.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.4.2.1\">\u2717</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.4.2.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.4.2.3\">9.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.4.2.4\">54.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.4.2.5\">8.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.4.2.6\">58.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T3.2.5.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T3.2.5.3.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T3.2.5.3.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T3.2.5.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.5.3.3.1\">52.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T3.2.5.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.5.3.4.1\">72.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S3.T3.2.5.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.5.3.5.1\">57.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T3.2.5.3.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.5.3.6.1\">75.9</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span 
class=\"ltx_text ltx_font_bold\" id=\"S3.T3.3.1.1\">Table 3</span>: </span>Ablation experimental on loss functions.\n</figcaption>\n</figure>",
|
| 96 |
+
"capture": "Table 3: Ablation experimental on loss functions.\n"
|
| 97 |
+
},
|
| 98 |
+
"4": {
|
| 99 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T4.2\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T4.2.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_tt\" colspan=\"3\" id=\"S3.T4.2.1.1.1\">Experiment Settings</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"S3.T4.2.1.1.2\">Training Set</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S3.T4.2.1.1.3\">Validation Set</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T4.2.2.2.1\">2D FE.</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T4.2.2.2.2\">3D FA.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T4.2.2.2.3\">Sn.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T4.2.2.2.4\">mIoU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T4.2.2.2.5\">Acc</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T4.2.2.2.6\">mIoU</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.2.2.7\">Acc</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.2.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T4.2.3.3.1\">\u2717</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.3.3.2\">\u2717</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T4.2.3.3.3\">\u2717</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.3.3.4\">27.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T4.2.3.3.5\">47.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.3.3.6\">28.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.3.3.7\">45.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.2.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S3.T4.2.4.4.1\">\u2713</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.4.4.2\">\u2717</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.2.4.4.3\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.4.4.4\">32.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.2.4.4.5\">53.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.4.4.6\">31.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.4.4.7\">53.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.2.5.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S3.T4.2.5.5.1\">\u2717</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.5.5.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.2.5.5.3\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.5.5.4\">46.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.2.5.5.5\">65.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.5.5.6\">47.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.5.5.7\">66.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.2.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S3.T4.2.6.6.1\">\u2713</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.6.6.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.2.6.6.3\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.6.6.4\">52.9</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.2.6.6.5\">75.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.6.6.6\">52.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.6.6.7\">73.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.2.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T4.2.7.7.1\">\u2717</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.7.7.2\">\u2717</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T4.2.7.7.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.7.7.4\">36.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T4.2.7.7.5\">52.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.7.7.6\">31.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.2.7.7.7\">48.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.2.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S3.T4.2.8.8.1\">\u2713</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.8.8.2\">\u2717</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.2.8.8.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.8.8.4\">40.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.2.8.8.5\">62.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.8.8.6\">38.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.8.8.7\">59.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.2.9.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S3.T4.2.9.9.1\">\u2717</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.9.9.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.2.9.9.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.9.9.4\">56.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T4.2.9.9.5\">75.8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.9.9.6\">51.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.2.9.9.7\">71.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.2.10.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb\" id=\"S3.T4.2.10.10.1\">\u2713</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.2.10.10.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T4.2.10.10.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.2.10.10.4\">63.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T4.2.10.10.5\">79.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.2.10.10.6\">57.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T4.2.10.10.7\">75.9</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.3.1.1\">Table 4</span>: </span>Ablation experiments on ODP module.\nIn the table, \u201c2D FE.\u201d and \u201c3D FA.\u201d indicate the usage of 2D feature extraction and 3D feature aggregation module, separately.\n\u201cSn.\u201d indicates whether using student network in our model.\n</figcaption>\n</figure>",
|
| 100 |
+
"capture": "Table 4: Ablation experiments on ODP module.\nIn the table, \u201c2D FE.\u201d and \u201c3D FA.\u201d indicate the usage of 2D feature extraction and 3D feature aggregation module, separately.\n\u201cSn.\u201d indicates whether using student network in our model.\n"
|
| 101 |
+
}
|
| 102 |
+
},
|
| 103 |
+
"image_paths": {
|
| 104 |
+
"1": {
|
| 105 |
+
"figure_path": "2409.13982v1_figure_1.png",
|
| 106 |
+
"caption": "Fig. 1: \nThe overview of existing methods (Up) and the analysis of the problems with existing methods (Down).",
|
| 107 |
+
"url": "http://arxiv.org/html/2409.13982v1/x1.png"
|
| 108 |
+
},
|
| 109 |
+
"2": {
|
| 110 |
+
"figure_path": "2409.13982v1_figure_2.png",
|
| 111 |
+
"caption": "Fig. 2: \nThe proposed pipeline comprises two key module, which is described detailly in Section 2.",
|
| 112 |
+
"url": "http://arxiv.org/html/2409.13982v1/x2.png"
|
| 113 |
+
},
|
| 114 |
+
"3": {
|
| 115 |
+
"figure_path": "2409.13982v1_figure_3.png",
|
| 116 |
+
"caption": "Fig. 3: Some Semantic segmentation visualization results.",
|
| 117 |
+
"url": "http://arxiv.org/html/2409.13982v1/x3.png"
|
| 118 |
+
}
|
| 119 |
+
},
|
| 120 |
+
"validation": true,
|
| 121 |
+
"references": [
|
| 122 |
+
{
|
| 123 |
+
"1": {
|
| 124 |
+
"title": "\u201cA review of deep learning-based semantic segmentation for point cloud,\u201d",
|
| 125 |
+
"author": "Jiaying Zhang, Xiaoli Zhao, Zheng Chen, and Zhejun Lu,",
|
| 126 |
+
"venue": "IEEE access, vol. 7, pp. 179118\u2013179133, 2019.",
|
| 127 |
+
"url": null
|
| 128 |
+
}
|
| 129 |
+
},
|
| 130 |
+
{
|
| 131 |
+
"2": {
|
| 132 |
+
"title": "\u201cSemantic segmentation of 3d lidar data using deep learning: a review of projection-based methods,\u201d",
|
| 133 |
+
"author": "Alok Jhaldiyal and Navendu Chaudhary,",
|
| 134 |
+
"venue": "Applied Intelligence, vol. 53, no. 6, pp. 6844\u20136855, 2023.",
|
| 135 |
+
"url": null
|
| 136 |
+
}
|
| 137 |
+
},
|
| 138 |
+
{
|
| 139 |
+
"3": {
|
| 140 |
+
"title": "\u201cPointclustering: Unsupervised point cloud pre-training using transformation invariance in clustering,\u201d",
|
| 141 |
+
"author": "Fuchen Long, Ting Yao, Zhaofan Qiu, Lusong Li, and Tao Mei,",
|
| 142 |
+
"venue": "in CVPR, 2023, pp. 21824\u201321834.",
|
| 143 |
+
"url": null
|
| 144 |
+
}
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"4": {
|
| 148 |
+
"title": "\u201cU3ds3: Unsupervised 3d semantic scene segmentation,\u201d",
|
| 149 |
+
"author": "Jiaxu Liu, Zhengdi Yu, Toby P Breckon, and Hubert PH Shum,",
|
| 150 |
+
"venue": "arXiv preprint arXiv:2311.06018, 2023.",
|
| 151 |
+
"url": null
|
| 152 |
+
}
|
| 153 |
+
},
|
| 154 |
+
{
|
| 155 |
+
"5": {
|
| 156 |
+
"title": "\u201cLearning transferable visual models from natural language supervision,\u201d",
|
| 157 |
+
"author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.,",
|
| 158 |
+
"venue": "in ICML. PMLR, 2021, pp. 8748\u20138763.",
|
| 159 |
+
"url": null
|
| 160 |
+
}
|
| 161 |
+
},
|
| 162 |
+
{
|
| 163 |
+
"6": {
|
| 164 |
+
"title": "\u201cResidual-based language models are free boosters for biomedical imaging,\u201d 2024.",
|
| 165 |
+
"author": "Zhixin Lai, Jing Wu, Suiyao Chen, Yucheng Zhou, and Naira Hovakimyan,",
|
| 166 |
+
"venue": null,
|
| 167 |
+
"url": null
|
| 168 |
+
}
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"7": {
|
| 172 |
+
"title": "\u201cOpenscene: 3d scene understanding with open vocabularies,\u201d",
|
| 173 |
+
"author": "Songyou Peng, Kyle Genova, Chiyu Jiang, Andrea Tagliasacchi, Marc Pollefeys, Thomas Funkhouser, et al.,",
|
| 174 |
+
"venue": "in CVPR, 2023, pp. 815\u2013824.",
|
| 175 |
+
"url": null
|
| 176 |
+
}
|
| 177 |
+
},
|
| 178 |
+
{
|
| 179 |
+
"8": {
|
| 180 |
+
"title": "\u201cPointclip: Point cloud understanding by clip,\u201d",
|
| 181 |
+
"author": "Renrui Zhang, Ziyu Guo, Wei Zhang, Kunchang Li, Xupeng Miao, Bin Cui, Yu Qiao, Peng Gao, and Hongsheng Li,",
|
| 182 |
+
"venue": "in CVPR, 2022, pp. 8552\u20138562.",
|
| 183 |
+
"url": null
|
| 184 |
+
}
|
| 185 |
+
},
|
| 186 |
+
{
|
| 187 |
+
"9": {
|
| 188 |
+
"title": "\u201c3dmv: Joint 3d-multi-view prediction for 3d semantic scene segmentation,\u201d",
|
| 189 |
+
"author": "Angela Dai and Matthias Nie\u00dfner,",
|
| 190 |
+
"venue": "in ECCV, 2018, pp. 452\u2013468.",
|
| 191 |
+
"url": null
|
| 192 |
+
}
|
| 193 |
+
},
|
| 194 |
+
{
|
| 195 |
+
"10": {
|
| 196 |
+
"title": "\u201cOpen-vocabulary semantic segmentation with mask-adapted clip,\u201d",
|
| 197 |
+
"author": "Feng Liang, Bichen Wu, Xiaoliang Dai, Kunpeng Li, Yinan Zhao, Hang Zhang, Peizhao Zhang, Peter Vajda, and Diana Marculescu,",
|
| 198 |
+
"venue": "in CVPR, 2023, pp. 7061\u20137070.",
|
| 199 |
+
"url": null
|
| 200 |
+
}
|
| 201 |
+
},
|
| 202 |
+
{
|
| 203 |
+
"11": {
|
| 204 |
+
"title": "\u201cPer-pixel classification is not all you need for semantic segmentation,\u201d",
|
| 205 |
+
"author": "Bowen Cheng, Alex Schwing, and Alexander Kirillov,",
|
| 206 |
+
"venue": "NIPS, vol. 34, pp. 17864\u201317875, 2021.",
|
| 207 |
+
"url": null
|
| 208 |
+
}
|
| 209 |
+
},
|
| 210 |
+
{
|
| 211 |
+
"12": {
|
| 212 |
+
"title": "\u201cBrain tumor segmentation based on 3d residual u-net,\u201d",
|
| 213 |
+
"author": "Megh Bhalerao and Siddhesh Thakur,",
|
| 214 |
+
"venue": "in International MICCAI Brainlesion Workshop. Springer, 2019, pp. 218\u2013225.",
|
| 215 |
+
"url": null
|
| 216 |
+
}
|
| 217 |
+
},
|
| 218 |
+
{
|
| 219 |
+
"13": {
|
| 220 |
+
"title": "\u201cScannet: Richly-annotated 3d reconstructions of indoor scenes,\u201d",
|
| 221 |
+
"author": "Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nie\u00dfner,",
|
| 222 |
+
"venue": "in CVPR, 2017, pp. 5828\u20135839.",
|
| 223 |
+
"url": null
|
| 224 |
+
}
|
| 225 |
+
},
|
| 226 |
+
{
|
| 227 |
+
"14": {
|
| 228 |
+
"title": "\u201c3d semantic parsing of large-scale indoor spaces,\u201d",
|
| 229 |
+
"author": "Iro Armeni, Ozan Sener, Amir R Zamir, Helen Jiang, Ioannis Brilakis, Martin Fischer, and Silvio Savarese,",
|
| 230 |
+
"venue": "in CVPR, 2016, pp. 1534\u20131543.",
|
| 231 |
+
"url": null
|
| 232 |
+
}
|
| 233 |
+
},
|
| 234 |
+
{
|
| 235 |
+
"15": {
|
| 236 |
+
"title": "\u201c4d spatio-temporal convnets: Minkowski convolutional neural networks,\u201d",
|
| 237 |
+
"author": "Christopher Choy, JunYoung Gwak, and Silvio Savarese,",
|
| 238 |
+
"venue": "in CVPR, 2019, pp. 3075\u20133084.",
|
| 239 |
+
"url": null
|
| 240 |
+
}
|
| 241 |
+
},
|
| 242 |
+
{
|
| 243 |
+
"16": {
|
| 244 |
+
"title": "\u201cMix3d: Out-of-context data augmentation for 3d scenes,\u201d",
|
| 245 |
+
"author": "Alexey Nekrasov, Jonas Schult, Or Litany, Bastian Leibe, and Francis Engelmann,",
|
| 246 |
+
"venue": "in 3DV. IEEE, 2021, pp. 116\u2013125.",
|
| 247 |
+
"url": null
|
| 248 |
+
}
|
| 249 |
+
},
|
| 250 |
+
{
|
| 251 |
+
"17": {
|
| 252 |
+
"title": "\u201cStratified transformer for 3d point cloud segmentation,\u201d",
|
| 253 |
+
"author": "Xin Lai, Jianhui Liu, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu, Xiaojuan Qi, and Jiaya Jia,",
|
| 254 |
+
"venue": "in CVPR, 2022, pp. 8500\u20138509.",
|
| 255 |
+
"url": null
|
| 256 |
+
}
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"18": {
|
| 260 |
+
"title": "\u201cMseg: A composite dataset for multi-domain semantic segmentation,\u201d",
|
| 261 |
+
"author": "John Lambert, Zhuang Liu, Ozan Sener, James Hays, and Vladlen Koltun,",
|
| 262 |
+
"venue": "in CVPR, 2020, pp. 2879\u20132888.",
|
| 263 |
+
"url": null
|
| 264 |
+
}
|
| 265 |
+
},
|
| 266 |
+
{
|
| 267 |
+
"19": {
|
| 268 |
+
"title": "\u201cClip-fo3d: Learning free open-world 3d scene representations from 2d dense clip,\u201d",
|
| 269 |
+
"author": "Junbo Zhang, Runpei Dong, and Kaisheng Ma,",
|
| 270 |
+
"venue": "arXiv:2303.04748, 2023.",
|
| 271 |
+
"url": null
|
| 272 |
+
}
|
| 273 |
+
},
|
| 274 |
+
{
|
| 275 |
+
"20": {
|
| 276 |
+
"title": "\u201cJoint 2d-3d-semantic data for indoor scene understanding,\u201d",
|
| 277 |
+
"author": "Iro Armeni, Sasha Sax, Amir R Zamir, and Silvio Savarese,",
|
| 278 |
+
"venue": "arXiv preprint arXiv:1702.01105, 2017.",
|
| 279 |
+
"url": null
|
| 280 |
+
}
|
| 281 |
+
},
|
| 282 |
+
{
|
| 283 |
+
"21": {
|
| 284 |
+
"title": "\u201cGenerative zero-shot learning for semantic segmentation of 3d point clouds,\u201d",
|
| 285 |
+
"author": "Bj\u00f6rn Michele, Alexandre Boulch, Gilles Puy, Maxime Bucher, and Renaud Marlet,",
|
| 286 |
+
"venue": "in 3DV. IEEE, 2021, pp. 992\u20131002.",
|
| 287 |
+
"url": null
|
| 288 |
+
}
|
| 289 |
+
},
|
| 290 |
+
{
|
| 291 |
+
"22": {
|
| 292 |
+
"title": "\u201cZero-shot point cloud segmentation by transferring geometric primitives,\u201d",
|
| 293 |
+
"author": "Runnan Chen, Xinge Zhu, Nenglun Chen, Wei Li, Yuexin Ma, Ruigang Yang, and Wenping Wang,",
|
| 294 |
+
"venue": "arXiv preprint arXiv:2210.09923, 2022.",
|
| 295 |
+
"url": null
|
| 296 |
+
}
|
| 297 |
+
}
|
| 298 |
+
],
|
| 299 |
+
"url": "http://arxiv.org/html/2409.13982v1"
|
| 300 |
+
}
|
20240921/2409.13984v1.json
ADDED
|
@@ -0,0 +1,123 @@
| 1 |
+
{
|
| 2 |
+
"title": "Cycle-Consistency Uncertainty Estimation for Visual Prompting based One-Shot Defect Segmentation",
|
| 3 |
+
"abstract": "Industrial defect detection traditionally relies on supervised learning models trained on fixed datasets of known defect types. While effective within a closed set, these models struggle with new, unseen defects, necessitating frequent re-labeling and re-training. Recent advances in visual prompting offer a solution by allowing models to adaptively infer novel categories based on provided visual cues. However, a prevalent issue in these methods is the over-confdence problem, where models\ncan mis-classify unknown objects as known objects with high certainty. To addresssing the fundamental concerns about the adaptability, we propose a solution to estimate uncertainty of the visual prompting process by cycle-consistency. We designed to check whether it can accurately restore the original prompt from its predictions. To quantify this, we measure the mean Intersection over Union (mIoU) between the restored\nprompt mask and the originally provided prompt mask. Without using complex designs or ensemble methods with multiple networks, our approach achieved a yield rate of 0.9175 in the VISION24 one-shot industrial challenge.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "In the realm of industrial defect detection, supervised learning approaches[2 ###reference_b2###, 5 ###reference_b5###] have traditionally dominated due to their ability to leverage labeled datasets to train deep-learning models. These models perform impressively within the confines of a closed set, where the defect categories are pre-defined and remain constant. However, this paradigm faces significant challenges in real-world industrial environments, where new types of defects frequently emerge. The necessity to continuously label and incorporate these novel defects into the training dataset not only imposes a considerable burden but also limits the adaptability of traditional models to unforeseen defect types.\nRecent advancements in machine learning have highlighted the potential of visual prompting techniques[4 ###reference_b4###, 1 ###reference_b1###], which offer a promising alternative for handling such dynamic scenarios. Unlike conventional methods that are constrained by fixed labels, visual prompting enables models to dynamically adapt and infer categories not encountered during training. This approach utilizes prompt images\u2014visual cues provided during inference\u2014to guide the model\u2019s interpretation and classification of defects, thereby expanding its capability to handle previously unseen categories.\nHowever, a prevalent issue in these methods is the overconfdence problem, where models\ncan mis-classify unknown objects as known objects with high certainty[3 ###reference_b3###]. The key expectation from visual prompting is that it enables models to adaptively infer new categories. However, in practice, models often exhibit biases towards previously learned categories, raising fundamental concerns about the adaptability that visual prompting is supposed to offer. To address this issue of bias in visual prompting, we propose a solution where the model outputs a confidence score for the prompting process. We proposed to check whether it can accurately restore the original prompt from its predictions. If the model has inferred the relationship between the prompt and the query image without bias, it should be able to perform accurate reverse inference as well. To quantify this, we measure the mean Intersection over Union (mIoU) between the restored prompt mask and the originally provided prompt mask. This confidence score will help in assessing the reliability of the model\u2019s predictions and mitigate the problem of bias.\nBy integrating those approach into industrial defect segmentation tasks, we aim to address the inherent limitations of traditional supervised learning methods and visual prompting. The ability of visual prompting to generalize and adapt to new defect types without the need for extensive re-labeling and retraining aligns well with the challenges posed by continuous defect emergence in industrial settings. And further Cycle-consistency based uncertainty estimation enhance the visual prompting reliability."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Baseline Method",
|
| 15 |
+
"text": "Our baseline is Dinov[4 ###reference_b4###], which is a visual prompting method build on top of an encoder-decoder architecture. To effectively formulate visual prompts, they designed prompt encoder to encode reference visual prompts from the reference images and designed shared decoder to decode the final target visual prompts from the target image. They designed an additional prompt classifier to categorize objects within the target images into one of the reference categories. However, the embedding layer trained in this setup undergoes parameter updates that enforce contrastive learning among seen categories within the training set, inherently leading to a bias towards these seen categories."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Proposed method",
|
| 21 |
+
"text": "Visual prompting models often display biases towards previously learned categories, which raises fundamental concerns regarding the adaptability of visual prompting techniques. To address this issue, we propose to check if it can accurately restore the original prompt from its predictions.\nIf the model has inferred the relationship between the prompt and the query image without bias, it should be able to perform accurate reverse inference as well. To quantify this, we measure the mean Intersection over Union (mIoU) between the restored prompt mask and the originally provided prompt mask.\n###figure_1###"
|
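The mIoU check described above reduces to a plain binary-mask IoU computation. Below is a minimal sketch; the helper name and NumPy-based implementation are our own assumptions, not the paper's code.

```python
import numpy as np

def mask_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """IoU between two binary masks of the same shape.

    An empty union (both masks empty) is treated as a perfect match.
    """
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0
    return float(np.logical_and(pred, gt).sum()) / float(union)
```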
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "Training",
|
| 27 |
+
"text": ""
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.1.1",
|
| 31 |
+
"parent_section_id": "3.1",
|
| 32 |
+
"section_name": "3.1.1 Image Encoder",
|
| 33 |
+
"text": "Using strong image feature extractor is a simple way to improve prediction accuracy. We employ modern archtecture Swin-L. We use publicly available pre-trained weights on COCO and ImageNet datasets."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1.2",
|
| 37 |
+
"parent_section_id": "3.1",
|
| 38 |
+
"section_name": "3.1.2 Data Augmentation",
|
| 39 |
+
"text": "In data augmentation policy, it is crucial to select methods carefully based on the characteristics of the data. In industrial inspection environments, there is generally variability in illumination but minimal changes in color. To reflect these characteristics, we applied random saturation, random brightness and random contrast with a range of [0.8, 1.2], and performed horizontal flipping of the original images with a probability of 0.5 for training"
|
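One possible realization of this augmentation policy with torchvision is sketched below; the paper does not name its augmentation library, so the specific transforms are an assumption (a jitter factor of 0.2 samples a multiplicative factor uniformly from [0.8, 1.2]).

```python
import torchvision.transforms as T

# Color jitter of 0.2 corresponds to sampling brightness/contrast/saturation
# factors uniformly from [0.8, 1.2]; hue is left untouched, matching the
# observation that color changes little in industrial inspection images.
train_transform = T.Compose([
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    T.RandomHorizontalFlip(p=0.5),
    T.ToTensor(),
])
```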
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Inference",
|
| 45 |
+
"text": "Many approaches in almost challenge rely on ensemble methods using multiple modes to achieve high performance on the given test set; however, we did not employ ensemble techniques due to resource constraints to train multiple models. Instead, our goal was to obtain reliable output in a single visual prompting model by estimating the confidence score, which defines how trustworthy the inference of visual prompting model are during the inference stage."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.2.1",
|
| 49 |
+
"parent_section_id": "3.2",
|
| 50 |
+
"section_name": "3.2.1 Forward phase",
|
| 51 |
+
"text": "As shown in Fig. 1, given a support image with its corresponding prompt mask and a query image, the goal of the forward phase is to identify the regions in the query image that correspond to the prompt. In the context of segmentation, this process results in the generation of a mask map and probability corresponding to the query image."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.2.2",
|
| 55 |
+
"parent_section_id": "3.2",
|
| 56 |
+
"section_name": "3.2.2 Reverse phase",
|
| 57 |
+
"text": "In reverse phase, prompting inference is conducted in reverse. The query image and the generated mask are treated as the support image and support mask, respectively, while the original support image is considered as the query image. This approach allows for prompting inference to generate a mask and corresponding to the pseudo query image."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.2.3",
|
| 61 |
+
"parent_section_id": "3.2",
|
| 62 |
+
"section_name": "3.2.3 Confidence Estimation",
|
| 63 |
+
"text": "Subsequently, the mIoU between the original support mask and the support mask predicted during the reverse phase is computed to quantify whether the model has made unbiased predictions in both the forward and reverse phases. Top1 score for each image in both forward and reverse phase then weight the mIoU score. Formally, this is represented as follows:\nwhere and represent top-1 score among the matching scores, while and denote the corresponding mask map.\nIt is notable that existing visual prompting methods exploit score for inference that increases the number of false positives.\n###figure_2### ###figure_3###"
|
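The forward phase, reverse phase, and mIoU weighting can be combined as sketched below. The exact combination rule is not written out in the text, so the product form and the `prompt_segment` callable (standing in for one pass of the visual prompting model) are assumptions.

```python
import numpy as np

def cycle_consistency_confidence(prompt_segment, support_img, support_mask, query_img):
    """Sketch of the cycle-consistency confidence estimate p_c.

    `prompt_segment(support_img, support_mask, query_img)` is a user-supplied
    (hypothetical) callable that runs one visual-prompting pass and returns
    (predicted_binary_mask, top1_score) for the query image.
    """
    # Forward phase: segment the query image using the given prompt.
    m_f, p_f = prompt_segment(support_img, support_mask, query_img)

    # Reverse phase: use the predicted query mask as the prompt and try to
    # restore the original support mask.
    m_r, p_r = prompt_segment(query_img, m_f, support_img)

    # Cycle-consistency: agreement between the restored and original prompt masks.
    union = np.logical_or(m_r, support_mask).sum()
    iou = 1.0 if union == 0 else float(np.logical_and(m_r, support_mask).sum()) / float(union)

    # Assumed combination: the top-1 scores of both phases weight the mIoU.
    p_c = p_f * p_r * iou
    return m_f, p_c
```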
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4",
|
| 67 |
+
"parent_section_id": null,
|
| 68 |
+
"section_name": "Experiments",
|
| 69 |
+
"text": ""
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.1",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "Dataset and Evaluation Metric",
|
| 75 |
+
"text": "VISION24 one-shot industrial inspection dataset consists of 2024 images for training, 2000 number of support, query pairs for testing. It aims to address the unique data imbalance bottleneck of vision-based industrial inspection by tapping into the potential of reference learning through appropriate visual prompts. Featuring 5 categories of products from diverse domains, this dataset contain 3 groups of defects: known, unknown and unseen defects in the final test set. Rigorously designed evaluation metric evaluates the accuracy of a solution in two key aspects: the positive pair catch rate and the negative pair yield rate. A positive pair is deemed a good catch if the IoU between the predicted mask and the ground truth is greater than or equal to 0.3. For negative pairs, a pair is considered a correct yield if the response rate in the prediction is lower than the pre-defined threshold."
|
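Under the stated criteria, the two rates can be computed as in the sketch below; this is a simplified reading of the challenge metric in which per-pair IoU and response-rate values are assumed to be precomputed, and the official response-rate definition is not reproduced.

```python
def catch_rate(positive_pair_ious, iou_threshold=0.3):
    """Fraction of positive pairs whose predicted mask reaches IoU >= 0.3 with the GT."""
    if not positive_pair_ious:
        return 0.0
    return sum(iou >= iou_threshold for iou in positive_pair_ious) / len(positive_pair_ious)

def yield_rate(negative_pair_responses, response_threshold):
    """Fraction of negative pairs whose predicted response rate stays below the threshold."""
    if not negative_pair_responses:
        return 0.0
    return sum(r < response_threshold for r in negative_pair_responses) / len(negative_pair_responses)
```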
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.2",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "Implementation Details",
|
| 81 |
+
"text": "The training set contains five categories: Cable, Cylinder, PCB, Screw, and Wood, each consisting of one or more defects. For example, the Cable category includes two defects: thunderbolt and torn-apart. We considered categories with different defects, even if they belong to the same main category, as independent classes. Thus, a total of 12 independent classes were defined in the training set. The training images are resized by and augmented by horizontal flip, random brightness in the range of [0.8, 1.2], random contrast in the range of [0.8, 1.2] and the random saturation in the range of [0.8, 1.2]. The DINOv network is trained with a batch size 64 on 8 GPUs for 20K iterations using AdamW optimizer. All the other settings follow DINOv official implementation.\nWhen the was greater than 0.18, we trusted and used the mask map predicted by the visual prompting model. When the was below 0.18, was converted to a null mask, which led to a tendency for a somewhat lower catch rate while improving yield rate. To improve the catch rate, we used a DINOv model officially pre-trained on COCO and SAM data for this range. By applying cycle-consistency-based uncertainty estimation in the same manner as the DINOv model, we considered the model\u2019s prediction as a null mask when was below 0.015, and trusted the predicted mask map when was above 0.015."
|
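The two-stage decision rule described above can be summarized as follows; `fine_tuned_predict` and `pretrained_predict` are hypothetical wrappers that each return a predicted mask together with its cycle-consistency confidence, and only the two thresholds (0.18 and 0.015) come from the text.

```python
import numpy as np

def decide_mask(fine_tuned_predict, pretrained_predict,
                support_img, support_mask, query_img,
                tau_finetuned=0.18, tau_pretrained=0.015):
    """Sketch of the inference-time decision rule from Sec. 4.2."""
    # First, query the DINOv model fine-tuned on the challenge training set.
    mask, p_c = fine_tuned_predict(support_img, support_mask, query_img)
    if p_c > tau_finetuned:
        return mask

    # Low-confidence range: fall back to the officially pre-trained DINOv
    # (COCO + SAM) with its own cycle-consistency threshold.
    mask, p_c = pretrained_predict(support_img, support_mask, query_img)
    if p_c > tau_pretrained:
        return mask

    # Otherwise report a null (all-zero) mask for this support-query pair.
    return np.zeros_like(mask)
```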
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.3",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "Analysis",
|
| 87 |
+
"text": "For quantitative evaluation, we assessed the method on the final test set of the challenge and observed the performance shown in Table 1. The proposed method achieved a high yield rate without any specialized network design. This result is quantitatively confirmed by the substantial reduction in false positives due to the Cycle-consistency-based uncertainty estimation. It is anticipated that training multiple models and employing ensemble techniques could further enhance the catch rate by capturing a more diverse feature space.\nFor qualitative evaluation, we analyzed the results of the forward and reverse phases on the support and query data of the test set. As shown in Fig. 2, in some cases, the support mask was not accurately restored due to model bias, and the score was lower than the pre-defined threshold, which led the model to correctly convert the predicted mask to a null mask. In the case of the \u2019Cable\u2019\nexample, the value is 0.977, indicating that the model predicted the mask very confidently. However, the mIoU between the restored support mask and the ground truth was measured at 0.048, which is very low.\nOn the other hand, as shown in Fig. 3, in cases where the predictions were accurate, the model correctly restored the support mask with high mIoU through both the forward and reverse phases. In these samples, the support mask was accurately restored with high mIoU, and the score was higher than the pre-defined threshold, leading the model to consider the predicted mask.\n###table_1###"
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "5",
|
| 91 |
+
"parent_section_id": null,
|
| 92 |
+
"section_name": "Conclusion",
|
| 93 |
+
"text": "Recent advancements in visual prompting allow models to adaptively infer novel categories based on visual cues. However, a common issue with these methods is over-confidence, where models may misclassify unknown objects as known ones with high certainty. To address these concerns about adaptability, we propose a solution that estimates the uncertainty in the visual prompting process through cycle-consistency. Our method involves verifying whether the original prompt can be accurately restored from its predictions. We quantify this by measuring the mean Intersection over Union (mIoU) between the restored prompt mask and the original prompt mask. Experimental analysis demonstrated that false positive masks with high prediction scores could be corrected through cycle-consistency-based uncertainty estimation. Additionally, without employing complex designs or ensemble methods with multiple networks, our approach achieved a yield rate of 0.9175 in the VISION24 one-shot industrial challenge."
|
| 94 |
+
}
|
| 95 |
+
],
|
| 96 |
+
"appendix": [],
|
| 97 |
+
"tables": {
|
| 98 |
+
"1": {
|
| 99 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.2.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S4.T1.3.2\" style=\"font-size:90%;\">The proposed method achieved a high yield rate without requiring any special network designs or complex ensemble structures. This is quantitatively validated by the significant reduction in false positives due to the Cycle-consistency-based uncertainty estimation. </span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T1.4\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.1.1.1\">Model</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.1.1.2\">Catch rate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.1.1.3\">Yield rate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.1.1.4\">PES</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.4.2.2.1\">ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.4.2.2.2\">0.77500</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.4.2.2.3\">0.91750</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T1.4.2.2.4\">0.84625</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 100 |
+
"capture": "Table 1: The proposed method achieved a high yield rate without requiring any special network designs or complex ensemble structures. This is quantitatively validated by the significant reduction in false positives due to the Cycle-consistency-based uncertainty estimation. "
|
| 101 |
+
}
|
| 102 |
+
},
|
| 103 |
+
"image_paths": {
|
| 104 |
+
"1": {
|
| 105 |
+
"figure_path": "2409.13984v1_figure_1.png",
|
| 106 |
+
"caption": "Figure 1: Given a support image with its corresponding prompt mask ms and a query image, the goal of the forward phase is\nto identify the regions in the query image that correspond to the prompt. In the\ncontext of segmentation, this process results in the generation of a mask map\nmfsubscript\ud835\udc5a\ud835\udc53m_{f}italic_m start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT and probability pfsubscript\ud835\udc5d\ud835\udc53p_{f}italic_p start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT corresponding to the query image.\nIn reverse phase, prompting inference is conducted in reverse.\nThe query image and the generated mask mfsubscript\ud835\udc5a\ud835\udc53m_{f}italic_m start_POSTSUBSCRIPT italic_f end_POSTSUBSCRIPT are treated as the support image\nand support mask, respectively, while the original support image is considered\nas the query image. This approach allows for prompting inference to generate a\nmask mrsubscript\ud835\udc5a\ud835\udc5fm_{r}italic_m start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT and prsubscript\ud835\udc5d\ud835\udc5fp_{r}italic_p start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT corresponding to the pseudo query image. Subsequently, the mIoU between the original support\nmask and the support mask predicted during the reverse phase is computed to\nquantify whether the model has made unbiased predictions in both the forward\nand reverse phases.",
|
| 107 |
+
"url": "http://arxiv.org/html/2409.13984v1/extracted/5869539/architecture_v1.png"
|
| 108 |
+
},
|
| 109 |
+
"2": {
|
| 110 |
+
"figure_path": "2409.13984v1_figure_2.png",
|
| 111 |
+
"caption": "Figure 2: Examples of correct-yield samples corrected by Cycle Consistency-based uncertainty estimation. The red mask in the top left represents the support image and its corresponding ground truth mask map. The bottom left shows the query image. The green mask in the bottom right indicates the query mask inferred through the forward phase, while the blue mask in the top right represents the support mask restored through the reverse phase. In these samples, the support mask was not accurately restored due to model bias, and the pcsubscriptpc\\textit{p}_{\\textit{c}}p start_POSTSUBSCRIPT c end_POSTSUBSCRIPT score was lower than the pre-defined threshold, leading the model to convert predicted mask mfsubscriptmf\\textit{m}_{\\textit{f}}m start_POSTSUBSCRIPT f end_POSTSUBSCRIPT to null mask.\nIn the case of the \u2019Cable\u2019 example, the pfsubscriptpf\\textit{p}_{\\textit{f}}p start_POSTSUBSCRIPT f end_POSTSUBSCRIPT value is 0.977, indicating that the model predicted the mask very confidently. However, the mIoU between the restored support mask and the ground truth was measured at 0.048, which is very low.",
|
| 112 |
+
"url": "http://arxiv.org/html/2409.13984v1/extracted/5869539/fails.png"
|
| 113 |
+
},
|
| 114 |
+
"3": {
|
| 115 |
+
"figure_path": "2409.13984v1_figure_3.png",
|
| 116 |
+
"caption": "Figure 3: Examples of good-catch samples. The red mask in the top left represents the support image and its corresponding ground truth mask map. The bottom left shows the query image. The green mask in the bottom right indicates the query mask inferred through the forward phase, while the blue mask in the top right represents the support mask restored through the reverse phase. In these samples, the support mask was accurately restored with high mIoU, and the pcsubscriptpc\\textit{p}_{\\textit{c}}p start_POSTSUBSCRIPT c end_POSTSUBSCRIPT score was higher than the pre-defined threshold, leading the model to consider predicted mfsubscriptmf\\textit{m}_{\\textit{f}}m start_POSTSUBSCRIPT f end_POSTSUBSCRIPT as correct.",
|
| 117 |
+
"url": "http://arxiv.org/html/2409.13984v1/extracted/5869539/good.png"
|
| 118 |
+
}
|
| 119 |
+
},
|
| 120 |
+
"validation": true,
|
| 121 |
+
"references": [],
|
| 122 |
+
"url": "http://arxiv.org/html/2409.13984v1"
|
| 123 |
+
}
|