Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes. See the raw diff for the complete change set.
- 20240323/2007.01359v3.json +0 -0
- 20240323/2011.12340v4.json +448 -0
- 20240323/2110.10494v2.json +263 -0
- 20240323/2111.08136v2.json +125 -0
- 20240323/2202.04476v3.json +424 -0
- 20240323/2202.12074v2.json +0 -0
- 20240323/2205.11100v2.json +0 -0
- 20240323/2206.08898v2.json +0 -0
- 20240323/2207.04913v2.json +262 -0
- 20240323/2208.08270v4.json +0 -0
- 20240323/2209.01426v2.json +89 -0
- 20240323/2212.09963v2.json +256 -0
- 20240323/2301.06627v3.json +0 -0
- 20240323/2301.10956v4.json +676 -0
- 20240323/2301.12528v2.json +227 -0
- 20240323/2302.10681v4.json +0 -0
- 20240323/2303.01656v2.json +0 -0
- 20240323/2304.00263v2.json +121 -0
- 20240323/2304.14670v2.json +0 -0
- 20240323/2305.15253v2.json +0 -0
- 20240323/2305.16582v2.json +0 -0
- 20240323/2306.04212v2.json +0 -0
- 20240323/2306.16788v3.json +0 -0
- 20240323/2308.11585v2.json +0 -0
- 20240323/2309.01157v2.json +0 -0
- 20240323/2309.03103v2.json +291 -0
- 20240323/2309.04937v3.json +250 -0
- 20240323/2309.06380v2.json +0 -0
- 20240323/2309.08865v3.json +221 -0
- 20240323/2309.09469v2.json +369 -0
- 20240323/2309.09574v2.json +0 -0
- 20240323/2309.10062v2.json +143 -0
- 20240323/2309.14552v2.json +146 -0
- 20240323/2309.14945v2.json +624 -0
- 20240323/2310.05261v2.json +105 -0
- 20240323/2310.08446v2.json +0 -0
- 20240323/2310.09725v3.json +0 -0
- 20240323/2310.15106v4.json +503 -0
- 20240323/2310.18847v2.json +356 -0
- 20240323/2311.03326v2.json +76 -0
- 20240323/2311.07954v2.json +0 -0
- 20240323/2311.10959v3.json +0 -0
- 20240323/2311.13231v3.json +0 -0
- 20240323/2311.15383v2.json +0 -0
- 20240323/2312.02923v2.json +0 -0
- 20240323/2312.06203v2.json +129 -0
- 20240323/2312.07527v2.json +235 -0
- 20240323/2312.14481v2.json +0 -0
- 20240323/2401.08154v3.json +225 -0
- 20240323/2401.08503v3.json +0 -0
20240323/2007.01359v3.json
ADDED
The diff for this file is too large to render. See the raw diff.
20240323/2011.12340v4.json
ADDED
@@ -0,0 +1,448 @@
{
"title": "mForms : Multimodal Form-Filling with Question Answering",
"abstract": "This paper presents a new approach to form-filling by reformulating the task as multimodal natural language Question Answering (QA). The reformulation is achieved by first translating the elements on the GUI form (text fields, buttons, icons, etc.) to natural language questions, where these questions capture the element\u2019s multimodal semantics. After a match is determined between the form element (Question) and the user utterance (Answer), the form element is filled through a pre-trained extractive QA system. By leveraging pre-trained QA models and not requiring form-specific training, this approach to form-filling is zero-shot. The paper also presents an approach to further refine the form-filling by using multi-task training to incorporate a potentially large number of successive tasks. Finally, the paper introduces a multimodal natural language form-filling dataset Multimodal Forms (mForms), as well as a multimodal extension of the popular ATIS dataset to support future research and experimentation. Results show the new approach not only maintains robust accuracy for sparse training conditions but achieves state-of-the-art F1 of 0.97 on ATIS with approximately 1/10th the training data.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "The last decade has seen the development and broad deployment of digital assistants (DAs) including Siri, Cortana, Alexa, Google Assistant, and Bixby. A primary component of DAs is Natural Language Understanding (NLU) - understanding the meaning of the user\u2019s utterance. Referring to Figure 1 ###reference_###, the NLU task determines the domain of the user\u2019s request (e.g., travel), the user\u2019s intent (e.g., find_flight) and information-bearing parameters commonly referred to as semantic slots (e.g., City-departure, City-arrival, and Date). The task of determining the semantic slots is called slot filling (Tur and De Mori 2011 ###reference_b31###). In this paper, we address a related but distinct task - form-filling, where the DA processes the user requests to act on form elements (fill text fields, click buttons and icons, etc.) on Mobile Apps or web pages. Equipping DAs with the ability to simultaneously parse visual semantic information and contextual dialogue enhances their ability to understand and act on information across multiple modalities. This type of multimodal interaction through conversations is currently an open problem and an active area of research (Sundar and Heck 2022 ###reference_b29###).\nEarly methods for the related area of semantic slot filling used recurrent neural networks (RNNs) (Mesnil et al. 2014 ###reference_b22###), then progressed to long short-term memory (LSTM) neural networks (Liu and Lane 2016 ###reference_b19###), and more recently transformer-based approaches (Chen, Zhuo, and Wang 2019 ###reference_b4###).\nDynamic deep learning approaches, while achieving high slot-filling accuracy, demand extensive domain-specific supervised training data. This poses challenges for applications with limited access to extensive data, such as AI skill development for DAs, limiting the broad expansion of AI skills to adequately cover the long tail of user goals and intents.\n###figure_1### Prior work has focused on developing models and approaches that require less supervised training data. Zero and few-shot learning methods have been developed across NLP tasks (Dauphin et al. 2013 ###reference_b5###; Yann et al. 2014 ###reference_b36###; Upadhyay et al. 2018 ###reference_b33###). Methods can be broadly categorized into transfer learning (Jaech, Heck, and Ostendorf 2016 ###reference_b14###; El-Kahky et al. 2014 ###reference_b8###; Hakkani-T\u00fcr et al. 2016 ###reference_b11###), sequential learning (Bapna et al. 2017a ###reference_b1###), reinforcement learning (Liu et al. 2017 ###reference_b20###; Kumar et al. 2017 ###reference_b15###; Shah, Hakkani-T\u00fcr, and Heck 2016 ###reference_b27###) and synthetic training (Xu et al. 2020 ###reference_b35###; Campagna et al. 2020 ###reference_b3###).\nIn many cases, the user interacts with an App screen or web page and, therefore, uses multiple modalities such as voice, vision, and/or touch (Heck et al. 2013 ###reference_b12###; Hakkani-T\u00fcr et al. 2014 ###reference_b10###; Li et al. 2019 ###reference_b18###; Selvaraju et al. 2019 ###reference_b26###; Zhang et al. 2020 ###reference_b38###; Xu et al. 2021 ###reference_b34###; Reichman et al. 2023 ###reference_b25###; Zhang et al. 2019 ###reference_b37###; Sundar and Heck 2023 ###reference_b30###; Reichman and Heck 2023 ###reference_b24###; Sundar, Richardson, and Heck 2024 ###reference_b28###). For these settings, zero- and few-shot learning can be achieved by leveraging the semantics contained in the screen. In (Bapna et al. 
2017b ###reference_b2###), the authors incorporated visual slot names or descriptions in a domain-agnostic slot tagging model called a Concept Tagger. The Concept Tagger models the visual slot description (e.g. \u201cdestination\u201d) as a Bag-of-Words (BOW) embedding vector and injects a Feed-Forward network inside the original deep LSTM network to process the user\u2019s utterance (e.g., \u201cGet a cab to 1945 Charleston\u201d). Results showed the inclusion of slot descriptions significantly outperformed the previous state-of-the-art multi-task transfer learning approach (Hakkani-T\u00fcr et al. 2016 ###reference_b11###).\nThe Concept Tagger (Bapna et al. 2017b ###reference_b2###) is limited in several ways. First, the BOW semantic representation of the visual slot description is static and does not model the dynamics of the description language. Second, the method is limited to only visual slots with text descriptions and does not incorporate other semantic information from the visual elements (i.e., is the element a form field or a radio button with choices). Third, the Concept Tagger incorporates multi-task learning only through the visual slot description.\nThis paper111Update of arXiv preprint (Heck and Heck 2020 ###reference_b13###). addresses all three limitations of the Concept Tagger. To address these limitations, the next section describes a new approach that formulates multimodal form-filling as Question Answering (QA). This approach also extends more recent work on text-based slot filling as QA (Levy et al. 2017 ###reference_b17###; Du et al. 2021 ###reference_b7###; Fuisz et al. 2022 ###reference_b9###) by developing a much broader, multimodal computer vision-based approach. The extension to a multimodal approach is required in form-filling where the QA formulation must cover all of the 25 UI component categories, 197 text button concepts, and 99 icon classes.\nIn the Experiments Section, we introduce a new corpus collected for multimodal form-filling called the Multimodal Forms (mForms) dataset as well as an extension of the ATIS (Tur, Hakkani-T\u00fcr, and Heck 2010 ###reference_b32###) dataset as a simulated form-filling task. We compare the new zero-shot multimodal form-filling QA approach to competing methods on this new corpora. Finally, we summarize our findings and suggest the next steps in the Conclusions and Future Work Section."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Approach",
"text": ""
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "Multimodal form-filling",
"text": "The foundation of the approach presented in this paper is the utilization of deeper semantics in the visual representation of the form on the user\u2019s screen. While previous form-filling methods treated the form label as a classification tag with no semantic information, the approach of this paper extracts meaning from the visual slot representation. By formulating the form field description as a Question and the user\u2019s utterance as the Paragraph, we can directly utilize transformer-based extractive Question Answering (QA) models (Lan et al. 2019 ###reference_b16###). The Start/End Span of the extracted Answer is used to fill the appropriate content in the web form. We call our approach Multimodal form-filling as Question Answering (QA), which we will henceforth refer to by mForms as QA.\n###figure_2### In addition to the lexical semantics contained in the text field description, the type of the visual graphical user interface (GUI) element on the App or web page provides additional semantic information. The set of GUI design elements of a mobile App that are available to translate into questions are shown in Figure 2 ###reference_###. In our approach, the GUI design elements are automatically classified via a convolutional deep neural network computer vision system trained on the RICO dataset as shown in Figure 9 ###reference_### of the Appendix (Deka et al. 2017 ###reference_b6###; Liu et al. 2018 ###reference_b21###). The computer vision classifier identifies 25 UI component categories (e.g., Ads, Checkboxes, On/Off switches, Radio Buttons), 197 text button concepts (e.g., login, back, add/delete/save/continue), and 99 icon classes (e.g., forward/backward, dots, plus symbol). Our implementation as described in (Liu et al. 2018 ###reference_b21###) has a 94% classification accuracy.\n###figure_3### In mForms, rules are used to translate each GUI design element into an appropriate question. Each type of GUI design element has a unique rule type that triggers depending on its visual presence on the GUI. Figure 3 ###reference_### shows an example of these GUI elements, their associated rule templates, and example questions and user utterances. If multiple GUI design elements are visible, then multiple translation rules fire, generating simultaneous questions to be paired with the user\u2019s utterance.\nFor example, GUI elements that are classified as simple text fields trigger a rule that generates a question template \u201cWhat is the Text_Field?\u201d. Figure 3 ###reference_### shows simple text fields in the Michaelsoft Vehicle Logger App. The first text field is \u201cVehicle\u201d. In this case, the rule recognizes command and generates a question \u201cWhat is the vehicle type?\u201d. Given a user utterance \u201cPlease track my business trip using GPS which I will take in my Toyota Prius.\u201d, the Question-Answering system extracts the answer to this question as \u201cToyota Prius\u201d."
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "Single- and Multi-Task Training",
"text": "Our mForms as QA method is shown in Figure 4 ###reference_###. It can be formulated as both single- and multi-task training.\nThe Single-task (ST) model is initialized as a general-purpose QA trained with SQuAD2 (Rajpurkar, Jia, and Liang 2018 ###reference_b23###). Used as-is, this model is zero-shot for form-filling. The model can be fine-tuned with supervised (annotated) form-filling data from the visual App or web page GUI.\n###figure_4### In contrast to Single-task training, Multi-task (MT) training incorporates form-filling training sets across multiple tasks, with each training stage further refining the model. Similar tasks represented by common domains can be grouped for successive fine-tuning stages. For example, flight reservation form-filling Apps could be successively refined using the first N-1 Apps, with the Nth App used as the final fine-tuning stage. The potential advantage of the MT approach is that the required amount of annotated supervised training data becomes smaller with each new task refinement stage."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Experiments",
"text": ""
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Setup",
"text": "Our base QA system is based on the Pytorch implementation of ALBERT (Lan et al. 2019 ###reference_b16###) 222https://github.com/huggingface/transformers. We use the pre-trained LM weights in the encoder module trained with all the official hyper-parameters333ALBERT (xxlarge)."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Multimodal Forms Dataset",
"text": "Amazon Mechanical Turk (AMT) was used to collect Multimodal Forms (mForms) - a dataset to support multimodal form-filling research444https://huggingface.co/avalab. The AMT crowd workers were asked to formulate requests to mobile App screens from three Apps in the RICO dataset: Vehicle Logger from Michaelsoft, United Airlines flight search, and Trip Advisor. The UIs of each App with GUI elements semantically annotated by the computer vision system described earlier are available online.555http://interactionmining.org/rico More details on the mForms dataset are given in the Appendix."
},
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "Simulated ATIS Form-Filling Dataset",
"text": "The ATIS dataset is a widely used NLU benchmark for users interacting through natural language with a flight booking system (Tur, Hakkani-T\u00fcr, and Heck 2010 ###reference_b32###). To use ATIS for mForms as QA, we extended the dataset in several ways. First, as shown in Figure 5 ###reference_###, each slot is treated as a simulated visual Text Field where the information of the slot tag is displayed in an App with a simple form. As is the case with Text Fields, each slot was reformulated as a natural language Question. For example, the ATIS slot tag \u201cB-aircraft_code\u201d is translated into the question \u201cWhat is the aircraft code?\u201d. This modified dataset will be called \u201cATIS form-filling\u201d.\n###figure_5### Table 1 ###reference_### summarizes the three Visual App datasets as well as ATIS form-filling with example utterances from each dataset. Table 6 ###reference_### in the Appendix shows the types of slots annotated in each dataset. ATIS form-filling has the largest number of slot types at 83.\nTable 2 ###reference_### summarizes F1 scores (harmonic mean of precision and recall) on the 3 new mForms datasets and the ATIS form-filling dataset. For comparison, the F1 score is given for the joint slot and intent model (JB) given in (Chen, Zhuo, and Wang 2019 ###reference_b4###). The new mForms as QA approach presented in this paper consistently outperforms the JB slot filler. While the JB slot filler requires at least 100 training samples on the Vehicle Logger App, the mForms as QA approach maintains the F1 score even for only 0 and 5 training samples. These results suggest the semantic information contained in the mForms is particularly important for sparse training conditions.\nWith more training data, is interesting to note that the accuracy of the new mForms as QA approach also achieves one of the best published F1 measures at 0.97 on the ATIS dataset. For comparison, the mForms as QA system was only trained on 500 samples for this case as compared to the full training set of ATIS at over 4400 samples. This suggests that the injection of simulated mForm and the subsequent generation of questions for the QA system is effective at reducing the amount of training data required to yield high accuracy. This characteristic of mForms as QA makes the approach especially attractive for commercial digital assistants given the industry\u2019s reliance on third-party developers who are often not highly skilled in NLU.\nTo examine the effect of visual semantics in a more controlled experiment, questions generated in the new mForms as QA approach were replaced with tag symbols, where the tag symbol had no semantic information (e.g., \u201cXYZ\u201d). Otherwise, \u201cNo Visuals\u201d is the same model as mForms as QA. Results for the Vehicle Logger App are shown in Table 3 ###reference_### comparing the No Visuals approach to two conditions from the mForms as QA approach (1) Text Only - all visuals are treated as simple Text Fields and other GUI elements are ignored (2) All GUI elements are used. The training samples were randomly chosen across all 10 slot types from the complete set of 500 utterances. Larger differences are observed in sparse training conditions where the No Visuals approach largely falters.\nGiven the mForms as QA approach incorporates multi-task (MT) training, an interesting question to answer is whether the MT training transfers knowledge across domains. 
Table 4 ###reference_### shows results for the cross-domain case: fine-tuning on the ATIS form-filling dataset followed by another iteration of fine-tuning on data from the Vehicle Logger App. The effect of MT is more pronounced in the sparse training cases with an improvement from 0.48 F1 to 0.52 F1 at zero-shot training and an improvement of 0.46 F1 to 0.60 F1 with 5 training samples. These results suggests cross-domain concept learning is occurring for these training conditions.\nFinally, Table 5 ###reference_### shows zero-shot F1 scores on the mForms datasets for our new approach when varying the number of visual GUI elements that are displayed to the user. For example, when 2 visual elements are displayed, the model must not only parse the slots from the utterance for one of the visual elements but also correctly reject the filling of slots into the other element. For the mForms datasets, the models degrade gracefully. This robustness is likely the result of the initial fine-tuning on the SQuAD2 dataset which is trained to reject false questions - questions that do not have a correct answer to extract from the given Paragraph."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Conclusions and Future Work",
"text": "This paper presented a new approach to filling GUI forms by reformulating the problem as a multimodal natural language\nQuestion Answering (QA) task.\nThe reformulation is achieved by first translating the elements on the GUI form (text\nfields, buttons, icons, etc.) to natural language questions, where these questions capture the element\u2019s multimodal\nsemantics. These questions are paired with the user\u2019s utterance and answers are extracted from the utterance using a Transformer-based Question-Answering system. An approach to further refine the model is presented that facilitates transfer learning across a large number of tasks.\nThe paper also presented mForms - a multimodal form-filling dataset to support future research.\nResults show the new approach not only maintains robust accuracy for sparse training conditions but achieves state-of-the-art F1 of 0.97 on ATIS with approximately 1/10th of the training data.\nFuture work will extend mForms as QA to a broader set of visual GUI screens across both mobile Apps and web pages. In addition, we plan to explore improved rejection methods for screens with high-density competing visual GUI elements. Lastly, while mForms as QA uses a BERT-based architecture for comparison with prior work, future work will explore ways to leverage generative models such as GPT3.5-4/T5/BART."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Appendix",
"text": "This section outlines the applications used to collect the mForms dataset. The slot schemas for each form are shown in Table 6 ###reference_###.\nVehicle Logger: The Vehicle Logger App shown in Figure 6 ###reference_### is a popular tool to create, share, and report vehicle log books for mileage, fuel expenses, and tax purposes. As previously described, the visual GUI elements of the Vehicle Logger App include Text fields (e.g., Odometer Value), Radio Buttons (e.g., Business, Personal, Other), and Text Buttons (e.g., Track distance with GPS). Referring to Table 1 ###reference_###, 850 utterances were collected with annotations according to 10 slot types.\nUnited Airlines: The United Airlines flight search App shown in Figure 7 ###reference_### is used to find flights according to travel plans and preferences. The GUI elements include simple text fields, tab buttons, and search buttons as well as more visually-oriented icons such as the user\u2019s current location (icons on the right-most column) and an icon to swap departure and arrival airports. 850 utterances were collected with annotations according to 6 slot types.\nTrip Advisor: Finally, Figure 8 ###reference_### shows the Trip Advisor App. This App serves many purposes including booking a table at restaurants as well as comparing prices when booking flights and hotels. The portion of the App used for this study focused on hotel room booking. Much of the App screen shown in the Figure contains visually oriented icons such as the symbol for people (in this case, showing 2 people) and a bed (1 bed in the room). The Trip Advisor dataset has 803 utterances with annotations according to 6 slots.\n###figure_6### ###figure_7### ###figure_8### ###figure_9###"
}
],
"appendix": [],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"Sx3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx3.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"Sx3.T1.1.1.1.1\">Visual App</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"Sx3.T1.1.1.1.2\">Sample Utterance</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T1.1.1.1.3\"># Utterances</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"Sx3.T1.1.2.1.1\">Vehicle Logger</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"Sx3.T1.1.2.1.2\">\u201cPlease activate GPS tracking and log my car trip\u201d</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T1.1.2.1.3\">850</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.3.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"Sx3.T1.1.3.2.1\">United</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"Sx3.T1.1.3.2.2\">\u201cBook a Flight from California to arizona on august 15th 2020\u201d</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.3.2.3\">850</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.4.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"Sx3.T1.1.4.3.1\">ATIS form-filling</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"Sx3.T1.1.4.3.2\">\u201cI live in Denver and I\u2019d like to make a trip to Pittsburgh\u201d</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T1.1.4.3.3\">4478</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"Sx3.T1.1.5.4.1\">Trip Advisor</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"Sx3.T1.1.5.4.2\">\u201cPlease book a 5 star hotel in Atlanta Georgia\u201d</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T1.1.5.4.3\">803</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Sample utterances from each domain</figcaption>\n</figure>",
"capture": "Table 1: Sample utterances from each domain"
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"Sx3.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx3.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"Sx3.T2.1.1.1.1\"># train samples</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"Sx3.T2.1.1.1.2\">0</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"Sx3.T2.1.1.1.3\">5</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"Sx3.T2.1.1.1.4\">50</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" colspan=\"2\" id=\"Sx3.T2.1.1.1.5\">100</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"Sx3.T2.1.1.1.6\">500</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx3.T2.1.2.2.1\">Domain</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx3.T2.1.2.2.2\">JB</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"Sx3.T2.1.2.2.3\">mForms</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx3.T2.1.2.2.4\">JB</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"Sx3.T2.1.2.2.5\">mForms</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx3.T2.1.2.2.6\">JB</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"Sx3.T2.1.2.2.7\">mForms</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx3.T2.1.2.2.8\">JB</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"Sx3.T2.1.2.2.9\">mForms</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx3.T2.1.2.2.10\">JB</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx3.T2.1.2.2.11\">mForms</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx3.T2.1.3.1.1\">Vehicle Logger</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T2.1.3.1.2\">0.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx3.T2.1.3.1.3\">0.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T2.1.3.1.4\">0.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx3.T2.1.3.1.5\">0.46</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T2.1.3.1.6\">0.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx3.T2.1.3.1.7\">0.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T2.1.3.1.8\">0.45</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx3.T2.1.3.1.9\">0.80</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T2.1.3.1.10\">0.78</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T2.1.3.1.11\">0.87</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"Sx3.T2.1.4.2.1\">ATIS 
form-filling</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.4.2.2\">0.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx3.T2.1.4.2.3\">0.60</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.4.2.4\">0.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx3.T2.1.4.2.5\">0.74</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.4.2.6\">0.66</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx3.T2.1.4.2.7\">0.88</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.4.2.8\">0.77</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx3.T2.1.4.2.9\">0.93</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.4.2.10\">0.91</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.4.2.11\">0.97</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"Sx3.T2.1.5.3.1\">United</th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.5.3.2\">0.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx3.T2.1.5.3.3\">0.40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.5.3.4\">0.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx3.T2.1.5.3.5\">0.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.5.3.6\">0.37</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx3.T2.1.5.3.7\">0.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.5.3.8\">0.44</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx3.T2.1.5.3.9\">0.72</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.5.3.10\">0.51</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T2.1.5.3.11\">0.74</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T2.1.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"Sx3.T2.1.6.4.1\">Trip Advisor</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T2.1.6.4.2\">0.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"Sx3.T2.1.6.4.3\">0.52</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T2.1.6.4.4\">0.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"Sx3.T2.1.6.4.5\">0.47</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T2.1.6.4.6\">0.18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"Sx3.T2.1.6.4.7\">0.63</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T2.1.6.4.8\">0.53</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"Sx3.T2.1.6.4.9\">0.66</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T2.1.6.4.10\">0.59</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T2.1.6.4.11\">0.66</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Weighted token F1 (harmonic mean of precision and recall) scores. The table shows the baseline (JB) as detailed in <cite class=\"ltx_cite ltx_citemacro_citep\">(Chen, Zhuo, and Wang <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2011.12340v4#bib.bib4\" title=\"\">2019</a>)</cite> versus our new mForms as QA approach.</figcaption>\n</figure>",
"capture": "Table 2: Weighted token F1 (harmonic mean of precision and recall) scores. The table shows the baseline (JB) as detailed in (Chen, Zhuo, and Wang 2019) versus our new mForms as QA approach."
},
"3": {
"table_html": "<figure class=\"ltx_table\" id=\"Sx3.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx3.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx3.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"Sx3.T3.1.1.1.1\"># train samples</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"Sx3.T3.1.1.1.2\">0</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"Sx3.T3.1.1.1.3\">50</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"Sx3.T3.1.1.1.4\">100</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T3.1.1.1.5\">500</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx3.T3.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"Sx3.T3.1.2.1.1\">No Visuals</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx3.T3.1.2.1.2\">0.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx3.T3.1.2.1.3\">0.29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx3.T3.1.2.1.4\">0.32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T3.1.2.1.5\">0.71</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T3.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"Sx3.T3.1.3.2.1\">Text Visuals</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx3.T3.1.3.2.2\">0.36</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx3.T3.1.3.2.3\">0.69</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"Sx3.T3.1.3.2.4\">0.71</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx3.T3.1.3.2.5\">0.88</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T3.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"Sx3.T3.1.4.3.1\">GUI (all) Visuals</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"Sx3.T3.1.4.3.2\">0.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"Sx3.T3.1.4.3.3\">0.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"Sx3.T3.1.4.3.4\">0.80</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T3.1.4.3.5\">0.87</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>F1 results showing effects of visual semantics on the Vehicle Logger App. The row labeled Text Visuals shows the results of our mForms as QA method with every visual element treated as a simple text field. GUI (all) Visuals leverage the full semantic information contained in the visual GUI elements for mForms as QA.</figcaption>\n</figure>",
"capture": "Table 3: F1 results showing effects of visual semantics on the Vehicle Logger App. The row labeled Text Visuals shows the results of our mForms as QA method with every visual element treated as a simple text field. GUI (all) Visuals leverage the full semantic information contained in the visual GUI elements for mForms as QA."
},
"4": {
"table_html": "<figure class=\"ltx_table\" id=\"Sx3.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx3.T4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx3.T4.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"Sx3.T4.1.1.1.1\"># train samples</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"Sx3.T4.1.1.1.2\">0</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"Sx3.T4.1.1.1.3\">5</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"Sx3.T4.1.1.1.4\">100</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T4.1.1.1.5\">500</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx3.T4.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"Sx3.T4.1.2.1.1\">Vehicle Logger</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx3.T4.1.2.1.2\">0.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx3.T4.1.2.1.3\">0.46</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx3.T4.1.2.1.4\">0.80</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T4.1.2.1.5\">0.87</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T4.1.3.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"Sx3.T4.1.3.2.1\">+ATIS form-filling</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"Sx3.T4.1.3.2.2\">0.52</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"Sx3.T4.1.3.2.3\">0.60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"Sx3.T4.1.3.2.4\">0.80</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T4.1.3.2.5\">0.89</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Results for multi-task training (MT) across multiple domains. The first row shows F1 scores for the Vehicle Logger dataset for various amounts of training data. The second row shows the effect of fine-training the SQuAD2 model with ATIS form-filling before training with the Vehicle Logger data.</figcaption>\n</figure>",
"capture": "Table 4: Results for multi-task training (MT) across multiple domains. The first row shows F1 scores for the Vehicle Logger dataset for various amounts of training data. The second row shows the effect of fine-training the SQuAD2 model with ATIS form-filling before training with the Vehicle Logger data."
},
"5": {
"table_html": "<figure class=\"ltx_table\" id=\"Sx3.T5\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"Sx3.T5.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx3.T5.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"Sx3.T5.1.1.1.1\"># elements</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"Sx3.T5.1.1.1.2\">1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"Sx3.T5.1.1.1.3\">2</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"Sx3.T5.1.1.1.4\">3</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"Sx3.T5.1.1.1.5\">4</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx3.T5.1.1.1.6\">5</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx3.T5.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"Sx3.T5.1.2.1.1\">Vehicle Logger</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx3.T5.1.2.1.2\">0.52</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx3.T5.1.2.1.3\">0.51</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx3.T5.1.2.1.4\">0.49</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"Sx3.T5.1.2.1.5\">0.49</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx3.T5.1.2.1.6\">0.46</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T5.1.3.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"Sx3.T5.1.3.2.1\">ATIS form-filling</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"Sx3.T5.1.3.2.2\">0.60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"Sx3.T5.1.3.2.3\">0.58</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"Sx3.T5.1.3.2.4\">0.56</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"Sx3.T5.1.3.2.5\">0.53</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx3.T5.1.3.2.6\">0.52</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Zero-Shot Slot F1 scores on the Vehicle Logger and ATIS form-filling datasets for varying numbers of visual elements shown to the user simultaneously</figcaption>\n</figure>",
"capture": "Table 5: Zero-Shot Slot F1 scores on the Vehicle Logger and ATIS form-filling datasets for varying numbers of visual elements shown to the user simultaneously"
},
"6": {
"table_html": "<figure class=\"ltx_table\" id=\"Sx5.T6\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx5.T6.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx5.T6.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"Sx5.T6.1.1.1.1\">Visual App</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx5.T6.1.1.1.2\">Slot descriptions</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx5.T6.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"Sx5.T6.1.2.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"Sx5.T6.1.2.1.1.1\">Vehicle Logger</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx5.T6.1.2.1.2\">fuel cost, fuel added, trip description, gps tracking, start logging, date, odometer value,</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx5.T6.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx5.T6.1.3.2.1\">trip type, entry, vehicle</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx5.T6.1.4.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"Sx5.T6.1.4.3.1\">United</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx5.T6.1.4.3.2\">arrival airport, departure airport, travel dates, search, switch/swap airports</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx5.T6.1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"Sx5.T6.1.5.4.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"Sx5.T6.1.5.4.1.1\">ATIS form-filling</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx5.T6.1.5.4.2\">aircraft code, airline code, airline name, airport code, airport name, arrival date (relative),</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx5.T6.1.6.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx5.T6.1.6.5.1\">arrival date (day name), arrival date (day number), arrival date (month name), etc</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx5.T6.1.7.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r ltx_border_t\" id=\"Sx5.T6.1.7.6.1\">Trip Advisor</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"Sx5.T6.1.7.6.2\">number of beds, date range, filter by price, filter by rating, number of nights, number of people</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>Slot schema / descriptions used for the mForms tagger for each domain</figcaption>\n</figure>",
"capture": "Table 6: Slot schema / descriptions used for the mForms tagger for each domain"
}
},
"image_paths": {
"1": {
"figure_path": "2011.12340v4_figure_1.png",
"caption": "Figure 1: An example semantic representation with domain, intent, and semantic slot annotations.",
"url": "http://arxiv.org/html/2011.12340v4/extracted/5491034/Images/nlu_problem.png"
},
"2": {
"figure_path": "2011.12340v4_figure_2.png",
"caption": "Figure 2: Semantically annotated mobile Graphical User Interface (GUI) using computer vision to identify 25 UI component categories, 197 text button concepts, and 99 icon classes (Liu et al. 2018)",
"url": "http://arxiv.org/html/2011.12340v4/x1.png"
},
"3": {
"figure_path": "2011.12340v4_figure_3.png",
"caption": "Figure 3: The mForms pipeline. The rule template uses the semantified UI to trigger a question template. The visual information of the GUI element drives the generation of the actual question. Then, using the user\u2019s request as evidence, the questions are answered to fill the form with the appropriate information.",
"url": "http://arxiv.org/html/2011.12340v4/extracted/5491034/Images/mforms_pipeline_Vehicle_logger.png"
},
"4": {
"figure_path": "2011.12340v4_figure_4.png",
"caption": "Figure 4: mForms as QA Approach",
"url": "http://arxiv.org/html/2011.12340v4/extracted/5491034/mFORMS_approach.png"
},
"5": {
"figure_path": "2011.12340v4_figure_5.png",
"caption": "Figure 5: Simulated ATIS form-filling fields translated to Questions",
"url": "http://arxiv.org/html/2011.12340v4/extracted/5491034/atis_descriptions.png"
},
"6": {
"figure_path": "2011.12340v4_figure_6.png",
"caption": "Figure 6: Vehicle Logger Application",
"url": "http://arxiv.org/html/2011.12340v4/extracted/5491034/Images/691_cropped.jpg"
},
"7": {
"figure_path": "2011.12340v4_figure_7.png",
"caption": "Figure 7: United flight search Application",
"url": "http://arxiv.org/html/2011.12340v4/extracted/5491034/Images/92_cropped.jpg"
},
"8": {
"figure_path": "2011.12340v4_figure_8.png",
"caption": "Figure 8: Trip Advisor Application",
"url": "http://arxiv.org/html/2011.12340v4/extracted/5491034/Images/2029.jpg"
},
"9": {
"figure_path": "2011.12340v4_figure_9.png",
"caption": "Figure 9: Computer Vision classification of GUI visual elements (Liu et al. 2018)",
"url": "http://arxiv.org/html/2011.12340v4/extracted/5491034/Images/CV_GUI.png"
}
},
"validation": true,
"references": [
{
"1": {
"title": "Sequential dialogue context modeling for spoken language understanding.",
"author": "Bapna, A.; Tur, G.; Hakkani-Tur, D.; and Heck, L. 2017a.",
"venue": "arXiv preprint arXiv:1705.03455 .",
"url": null
}
},
{
"2": {
"title": "Towards zero-shot frame semantic parsing for domain scaling.",
"author": "Bapna, A.; Tur, G.; Hakkani-Tur, D.; and Heck, L. 2017b.",
"venue": "arXiv preprint arXiv:1707.02363 .",
"url": null
}
},
{
"3": {
"title": "Zero-Shot Transfer Learning with Synthesized Data for Multi-Domain Dialogue State Tracking.",
"author": "Campagna, G.; Foryciarz, A.; Moradshahi, M.; and S., L. M. 2020.",
"venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.",
"url": null
}
},
{
"4": {
"title": "BERT for Joint Intent Classification and Slot Filling.",
"author": "Chen, Q.; Zhuo, Z.; and Wang, W. 2019.",
"venue": "arXiv:1902.10909 .",
"url": null
}
},
{
"5": {
"title": "Zero-shot learning for semantic utterance classification.",
"author": "Dauphin, Y. N.; Tur, G.; Hakkani-Tur, D.; and Heck, L. 2013.",
"venue": "arXiv preprint arXiv:1401.0509 .",
"url": null
}
},
{
"6": {
"title": "Rico: A Mobile App Dataset for Building Data-Driven Design Applications.",
"author": "Deka, B.; Huang, Z.; Franzen, C.; Hibschman, J.; Afergan, D.; Li, Y.; Nichols, J.; and Kumar, R. 2017.",
"venue": "In Proceedings of the 30th Annual Symposium on User Interface Software and Technology, UIST \u201917.",
"url": null
}
},
{
"7": {
"title": "QA-Driven Zero-shot Slot Filling with Weak Supervision Pretraining.",
"author": "Du, X.; He, L.; Li, Q.; Yu, D.; Pasupat, P.; and Zhang, Y. 2021.",
"venue": "In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), 654\u2013664. Online: Association for Computational Linguistics.",
"url": null
}
},
{
"8": {
"title": "Extending domain coverage of language understanding systems via intent transfer between domains using knowledge graphs and search query click logs.",
"author": "El-Kahky, A.; Liu, X.; Sarikaya, R.; Tur, G.; Hakkani-Tur, D.; and Heck, L. 2014.",
"venue": "In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4067\u20134071. IEEE.",
"url": null
}
},
{
"9": {
"title": "Improved and Efficient Conversational Slot Labeling through Question Answering.",
"author": "Fuisz, G.; Vuli\u0107, I.; Gibbons, S.; Casanueva, I.; and Budzianowski, P. 2022.",
"venue": "arXiv preprint arXiv:2204.02123 .",
"url": null
}
},
{
"10": {
"title": "Eye gaze for spoken language understanding in multi-modal conversational interactions.",
"author": "Hakkani-T\u00fcr, D.; Slaney, M.; Celikyilmaz, A.; and Heck, L. 2014.",
"venue": "In Proceedings of the 16th International Conference on Multimodal Interaction, 263\u2013266.",
"url": null
}
},
{
"11": {
"title": "Multi-Domain Joint Semantic Frame Parsing using Bi-directional RNN-LSTM.",
"author": "Hakkani-T\u00fcr, D.; Tur, G.; Celikyilmaz, A.; Chen, Y.-N. V.; Gao, J.; Deng, L.; and Wang, Y.-Y. 2016.",
"venue": "In Proceedings of The 17th Annual Meeting of the International Speech Communication Association (INTERSPEECH 2016). ISCA.",
"url": null
}
},
{
"12": {
"title": "Multi-Modal Conversational Search and Browse.",
"author": "Heck, L.; Hakkani-T\u00fcr, D.; Chinthakunta, M.; Tur, G.; Iyer, R.; Parthasarathy, P.; Stifelman, L.; Shriberg, E.; and Fidler, A. 2013.",
"venue": "In Proceedings of the First Workshop on Speech, Language and Audio in Multimedia (SLAM 2013), 96\u2013101.",
"url": null
}
},
{
"13": {
"title": "Zero-Shot Visual Slot Filling as Question Answering.",
"author": "Heck, L.; and Heck, S. 2020.",
"venue": "CoRR abs/2011.12340.",
"url": null
}
},
{
"14": {
"title": "Domain adaptation of recurrent neural networks for natural language understanding.",
"author": "Jaech, A.; Heck, L.; and Ostendorf, M. 2016.",
"venue": "arXiv:1604.00117 .",
"url": null
}
},
{
"15": {
"title": "Federated control with hierarchical multi-agent deep reinforcement learning.",
"author": "Kumar, S.; Shah, P.; Hakkani-Tur, D.; and Heck, L. 2017.",
"venue": "In Conference on Neural Information Processing Systems (NeurIPS), Hierarchical Reinforcement Learning Workshop.",
"url": null
}
},
{
"16": {
"title": "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations.",
"author": "Lan, Z.; Chen, M.; Goodman, S.; Gimpel, K.; Sharma, P.; and Soricut, R. 2019.",
"venue": "arXiv:1909.11942 .",
"url": null
}
},
{
"17": {
"title": "Zero-Shot Relation Extraction via Reading Comprehension.",
"author": "Levy, O.; Seo, M.; Choi, E.; and Zettlemoyer, L. 2017.",
"venue": "CoRR abs/1706.04115.",
"url": null
}
},
{
"18": {
"title": "RILOD: near real-time incremental learning for object detection at the edge.",
"author": "Li, D.; Tasci, S.; Ghosh, S.; Zhu, J.; Zhang, J.; and Heck, L. 2019.",
"venue": "In Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, 113\u2013126.",
"url": null
}
},
{
"19": {
"title": "Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling.",
"author": "Liu, B.; and Lane, I. 2016.",
"venue": null,
"url": null
}
},
{
"20": {
"title": "End-to-end optimization of task-oriented dialogue model with deep reinforcement learning.",
"author": "Liu, B.; Tur, G.; Hakkani-Tur, D.; Shah, P.; and Heck, L. 2017.",
"venue": "arXiv preprint arXiv:1711.10712 .",
"url": null
}
},
{
"21": {
"title": "Learning Design Semantics for Mobile Apps.",
"author": "Liu, T. F.; Craft, M.; Situ, J.; Yumer, E.; Mech, R.; and Kumar, R. 2018.",
"venue": "In The 31st Annual ACM Symposium on User Interface Software and Technology, UIST \u201918, 569\u2013579.",
"url": null
}
},
{
"22": {
"title": "Using recurrent neural networks for slot filling in spoken language understanding.",
"author": "Mesnil, G.; Dauphin, Y.; Yao, K.; Bengio, Y.; Deng, L.; Hakkani-Tur, D.; He, X.; Heck, L.; Tur, G.; Yu, D.; et al. 2014.",
"venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing 23(3): 530\u2013539.",
"url": null
}
},
{
"23": {
"title": "Know What You Don\u2019t Know: Unanswerable Questions for SQuAD 784\u2013789.",
"author": "Rajpurkar, P.; Jia, R.; and Liang, P. 2018.",
"venue": "doi:10.18653/v1/P18-2124.",
"url": null
}
},
{
"24": {
"title": "Cross-Modal Dense Passage Retrieval for Outside Knowledge Visual Question Answering.",
"author": "Reichman, B.; and Heck, L. 2023.",
"venue": "In 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2829\u20132834. Los Alamitos, CA, USA: IEEE Computer Society.",
"url": null
}
},
{
"25": {
"title": "Outside Knowledge Visual Question Answering Version 2.0.",
"author": "Reichman, B. Z.; Sundar, A.; Richardson, C.; Zubatiy, T.; Chowdhury, P.; Shah, A.; Truxal, J.; Grimes, M.; Shah, D.; Chee, W. J.; Punjwani, S.; Jain, A.; and Heck, L. 2023.",
"venue": "In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1\u20135.",
"url": null
}
},
{
"26": {
"title": "Taking a hint: Leveraging explanations to make vision and language models more grounded.",
"author": "Selvaraju, R. R.; Lee, S.; Shen, Y.; Jin, H.; Ghosh, S.; Heck, L.; Batra, D.; and Parikh, D. 2019.",
"venue": "In Proceedings of the IEEE International Conference on Computer Vision, 2591\u20132600.",
"url": null
}
},
{
"27": {
"title": "Interactive reinforcement learning for task-oriented dialogue management.",
"author": "Shah, P.; Hakkani-T\u00fcr, D.; and Heck, L. 2016.",
"venue": "In Conference on Neural Information Processing Systems (NIPS), Workshop on Deep Learning for Action and Interaction.",
"url": null
+
}
|
| 357 |
+
},
|
| 358 |
+
{
|
| 359 |
+
"28": {
|
| 360 |
+
"title": "gTBLS: Generating Tables from Text by Conditional Question Answering.",
|
| 361 |
+
"author": "Sundar, A.; Richardson, C.; and Heck, L. 2024.",
|
| 362 |
+
"venue": null,
|
| 363 |
+
"url": null
|
| 364 |
+
}
|
| 365 |
+
},
|
| 366 |
+
{
|
| 367 |
+
"29": {
|
| 368 |
+
"title": "Multimodal Conversational AI A Survey of Datasets and Approaches.",
|
| 369 |
+
"author": "Sundar, A. S.; and Heck, L. 2022.",
|
| 370 |
+
"venue": "ACL 2022 131.",
|
| 371 |
+
"url": null
|
| 372 |
+
}
|
| 373 |
+
},
|
| 374 |
+
{
|
| 375 |
+
"30": {
|
| 376 |
+
"title": "cTBLS: Augmenting Large Language Models with Conversational Tables.",
|
| 377 |
+
"author": "Sundar, A. S.; and Heck, L. 2023.",
|
| 378 |
+
"venue": "In Chen, Y.-N.; and Rastogi, A., eds., Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023), 59\u201370. Toronto, Canada: Association for Computational Linguistics.",
|
| 379 |
+
"url": null
|
| 380 |
+
}
|
| 381 |
+
},
|
| 382 |
+
{
|
| 383 |
+
"31": {
|
| 384 |
+
"title": "Spoken Language Understanding: Systems for Extracting Semantic Information from Speech.",
|
| 385 |
+
"author": "Tur, G.; and De Mori, R., eds. 2011.",
|
| 386 |
+
"venue": "Wiley.",
|
| 387 |
+
"url": null
|
| 388 |
+
}
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"32": {
|
| 392 |
+
"title": "What is left to be understood in ATIS?",
|
| 393 |
+
"author": "Tur, G.; Hakkani-T\u00fcr, D.; and Heck, L. 2010.",
|
| 394 |
+
"venue": "In 2010 IEEE Spoken Language Technology Workshop, 19\u201324. IEEE.",
|
| 395 |
+
"url": null
|
| 396 |
+
}
|
| 397 |
+
},
|
| 398 |
+
{
|
| 399 |
+
"33": {
|
| 400 |
+
"title": "(Almost) zero-shot cross-lingual spoken language understanding.",
|
| 401 |
+
"author": "Upadhyay, S.; Faruqui, M.; Tur, G.; Dilek, H.-T.; and Heck, L. 2018.",
|
| 402 |
+
"venue": "In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 6034\u20136038. IEEE.",
|
| 403 |
+
"url": null
|
| 404 |
+
}
|
| 405 |
+
},
|
| 406 |
+
{
|
| 407 |
+
"34": {
|
| 408 |
+
"title": "Grounding Open-Domain Instructions to Automate Web Support Tasks.",
|
| 409 |
+
"author": "Xu, N.; Masling, S.; Du, M.; Campagna, G.; Heck, L.; Landay, J.; and Lam, M. 2021.",
|
| 410 |
+
"venue": "In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 1022\u20131032.",
|
| 411 |
+
"url": null
|
| 412 |
+
}
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"35": {
|
| 416 |
+
"title": "AutoQA: From Databases To QA Semantic Parsers With Only Synthetic Training Data.",
|
| 417 |
+
"author": "Xu, S. X.; Semnani, S. J.; Campagna, G.; and Lam, M. S. 2020.",
|
| 418 |
+
"venue": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing.",
|
| 419 |
+
"url": null
|
| 420 |
+
}
|
| 421 |
+
},
|
| 422 |
+
{
|
| 423 |
+
"36": {
|
| 424 |
+
"title": "Zero-shot learning and clustering for semantic utterance classification using deep learning.",
|
| 425 |
+
"author": "Yann, D.; Tur, G.; Hakkani-Tur, D.; and Heck, L. 2014.",
|
| 426 |
+
"venue": "In International Conference on Learning Representations (cited on page 28).",
|
| 427 |
+
"url": null
|
| 428 |
+
}
|
| 429 |
+
},
|
| 430 |
+
{
|
| 431 |
+
"37": {
|
| 432 |
+
"title": "Generative visual dialogue system via weighted likelihood estimation.",
|
| 433 |
+
"author": "Zhang, H.; Ghosh, S.; Heck, L.; Walsh, S.; Zhang, J.; Zhang, J.; and Kuo, C.-C. J. 2019.",
|
| 434 |
+
"venue": "In Proceedings of the 28th International Joint Conference on Artificial Intelligence, 1025\u20131031.",
|
| 435 |
+
"url": null
|
| 436 |
+
}
|
| 437 |
+
},
|
| 438 |
+
{
|
| 439 |
+
"38": {
|
| 440 |
+
"title": "Class-incremental learning via deep model consolidation.",
|
| 441 |
+
"author": "Zhang, J.; Zhang, J.; Ghosh, S.; Li, D.; Tasci, S.; Heck, L.; Zhang, H.; and Kuo, C.-C. J. 2020.",
|
| 442 |
+
"venue": "In The IEEE Winter Conference on Applications of Computer Vision, 1131\u20131140.",
|
| 443 |
+
"url": null
|
| 444 |
+
}
|
| 445 |
+
}
|
| 446 |
+
],
|
| 447 |
+
"url": "http://arxiv.org/html/2011.12340v4"
|
| 448 |
+
}
|
20240323/2110.10494v2.json
ADDED
@@ -0,0 +1,263 @@
| 1 |
+
{
|
| 2 |
+
"title": "Deep Point Cloud Normal Estimation via Triplet Learning",
|
| 3 |
+
"abstract": "Current normal estimation methods for 3D point clouds often show limited accuracy in predicting normals at sharp features (e.g., edges and corners) and less robustness to noise. In this paper, we propose a novel normal estimation method for point clouds which consists of two phases: (a) feature encoding to learn representations of local patches, and (b) normal estimation that takes the learned representation as input and regresses the normal vector. We are motivated that local patches on isotropic and anisotropic surfaces respectively have similar and distinct normals, and these separable features or representations can be learned to facilitate normal estimation. To realise this, we design a triplet learning network for feature encoding and a normal estimation network to regress normals. Despite having a smaller network size compared with most other methods, experiments show that our method preserves sharp features and achieves better normal estimation results especially on computer-aided design (CAD) shapes.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Point cloud data is a representation of 3D geometry which has been applied in a wide range of fields, including autonomous driving, robotics and augmented reality [1 ###reference_b1###, 2 ###reference_b2###]. Raw point clouds consist of unordered points that lack normal information, and are usually corrupted by noise due to limitations on the scanning devices\u2019 precision [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###]. This makes it challenging to directly use raw point clouds for visual computing tasks such as surface reconstruction, shape smoothing and segmentation [1 ###reference_b1###, 5 ###reference_b5###]. Estimating reliable normal information on the input point clouds has proven to be significant in achieving satisfactory results in such tasks.\nConventional point cloud normal estimation methods based on principal component analysis (PCA) [7 ###reference_b7###] have limited accuracy in estimating normals at sharp features and are sensitive to noise. In recent years, with the development of deep learning for point clouds, learning-based normal estimation methods have been proposed to improve performance in this area, but there are still limitations among them. For example, the methods proposed in [8 ###reference_b8###, 4 ###reference_b4###, 9 ###reference_b9###] do not perform well in the presence of noise and in predicting normals at sharp features. The work presented in [3 ###reference_b3###] shows more robust performance on noisy inputs yet consists of a two-step testing phase that requires a long time to produce results. Nesti-Net [5 ###reference_b5###], while being a promising candidate, has a large network size and lengthy inference time.\nIn this paper, we introduce a novel normal estimation method for point clouds with triplet learning to address those limitations. We are motivated to exploit a triplet learning network that brings representations (or features) of similar point cloud patches close to each other, and pushes representations of dissimilar patches away from each other, which facilitates effective normal estimation. We employ PointNet [6 ###reference_b6###] as the feature-extraction backbone and treat the local neighbourhood of each point (i.e., central point) as a local patch. To define a triplet, we first select a local patch as the anchor, and regard another local patch with a small angle difference between their central point normals as the positive sample, and the third patch with a large angle difference as the negative sample. We derive this based on the fact that two local patches on the same isotropic surface should have similar central normals and representations. This phase learns patch features which will be used as input for the normal estimation phase. We design multilayer perceptrons (MLPs) as the normal estimation network and use it to regress the normal vector from the encoded features of a local patch, trained by our designed cosine similarity loss function.\nOur training and testing point cloud datasets consist of both CAD and non-CAD synthetic shapes, where CAD shapes contain sharp features while non-CAD shapes generally have smoother surfaces. Experiments show that our method achieves better results than other normal estimation techniques especially on noisy CAD shapes, while comparable performance is gained on smoother non-CAD shapes. 
Our method has several advantages: (a) it performs very well on noisy CAD shapes (both synthetic and scanned) in terms of preserving sharp features, (b) it is robust to irregular sampling and fewer points, (c) it has a small network size of 10.42 MB and it completes normal estimation within a short amount of time (i.e., 55.6 seconds per 100,000 points), and (d) the single trained network can estimate normals for both CAD and non-CAD shapes."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Normal Estimation for Point Clouds",
|
| 21 |
+
"text": "As a commonly adopted traditional normal estimation approach, PCA [7 ###reference_b7###] derives each point\u2019s normal by calculating the eigenvalues and eigenvectors of the covariance matrix of its neighbourhood. Nevertheless, PCA-based method and its variants tend to smooth out sharp edges and corners that exist in the input shapes. In recent years, learning-based normal estimation methods have started to emerge, but only a few of these methods showcase the ability to preserve sharp features, handle noise robustly and generalise among different shapes. As the pioneer of such approaches, Boulch and Marlet [8 ###reference_b8###] present HoughCNN, in which 3D point clouds are mapped to a 2D Hough space and, thereafter, a convolutional neural network (CNN) performs normal estimation on this representation. However, this transformation to the 2D space may discard important geometrical details. After the introduction of PointNet [6 ###reference_b6###], 3D points can be directly fed into networks and the feature representations can be directly extracted without being affected by the input sequence. Adopting this as the backbone, PCPNet [4 ###reference_b4###] encodes local neighbourhoods on point clouds as patches, and regresses normals from them. Nesti-Net [5 ###reference_b5###] adopts a mixture-of-experts network that checks points\u2019 neighbourhoods in varying scales and selects the optimal one. Similarly, NINormal [9 ###reference_b9###] adopts an attention module that softly selects neighbouring points to regress normals from them. To effectively preserve feature normals, Lu et al. [3 ###reference_b3###] classify points into feature and non-feature classes, and perform denoising on noisy point clouds in conjunction with normal estimation in their Deep Feature Preserving (DFP) method. Recently, Lenssen et al. [10 ###reference_b10###] proposed Deep Iterative (DI), a graph neural network-based approach that performs a weighted least squares plane-fitting of point neighbourhoods, where the weights are iteratively refined during the process."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Triplet Loss",
|
| 27 |
+
"text": "The triplet loss function, which focuses on optimizing the relative distances of an anchor to a positive and negative sample, has been applied to a broad range of contexts. Traditionally, triplet loss was mainly applied to 2D image processing tasks, such as image learning [11 ###reference_b11###], face identification [12 ###reference_b12###] and person matching [13 ###reference_b13###]. Extending from 2D, triplet loss has also been employed for 3D geometric data processing in recent years. For example, [2 ###reference_b2###] and [14 ###reference_b14###] use triplet loss for point cloud matching, and [15 ###reference_b15###] employs unsupervised triplet loss for point cloud classification. While triplet loss has proven to be effective for feature-learning tasks on 3D data, it has never been extended to normal estimation on 3D point clouds."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Method",
|
| 33 |
+
"text": "Our normal estimation approach consists of two phases: (a) feature encoding that learns geometric features from input point clouds, and (b) normal estimation that regresses normals from the encoded features. In phase (a), we first define the local patch as a point (i.e., central point) with its neighbouring points, and conduct pre-processing to mitigate pose and point number inconsistencies. We then construct triplets of patches and feed them to the feature encoding network (i.e., a triplet network). In phase (b), the latent representations learned by phase (a) are consumed by the normal estimation network which outputs a predicted normal for the given patch (i.e., normal for the central point of that patch). Note that a rotation matrix is used in phase (a) for patch alignment, and its inverse matrix is further used in phase (b) for recovering back to the original space. Fig. 1 ###reference_### shows the overall architecture of our method, which is trained in a supervised manner.\n###figure_1###"
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Feature Learning",
|
| 39 |
+
"text": "Patch Pre-processing. Before training, we pre-process each point cloud into patches as inputs for our model. For any point cloud , we define a local patch centered at a point as:\nwhere is any point within the patch and is the radius of the ball centered at point .\nThere are two issues associated with raw patches, making them unsuitable for effective learning: (a) they contain unnecessary degrees of freedom and rigid transformations, and (b) each patch\u2019s number of points might vary, which prohibits them from being grouped into input batches during the training phase [4 ###reference_b4###, 1 ###reference_b1###]. To address (a), we first translate each patch to its origin and normalise its size, such that . Inspired by [1 ###reference_b1###], we then align each patch\u2019s last principal axis with the -axis and the second principal axis with the -axis in Cartesian space using a rotation matrix , computed using the principal components of the patch\u2019s covariance matrix. As a result, the unnecessary degrees of freedom are removed, leaving the patch invariant under rigid transformations; the inverse rotation matrix is required later, during testing. To solve (b), we select points from each patch to ensure the input patches are of a consistent size. For raw patches with more points than , we randomly screen points; otherwise, we pad the input using existing points within the patch to make up points. We empirically set to be 500 and the ball radius to be 5% of the point cloud\u2019s bounding box diagonal.\nTriplet Generation. As defined in [12 ###reference_b12###], each data sample is treated as an anchor and needs to be paired with a positive and a negative sample in order to construct a triplet. We treat each input patch as an anchor patch , which is paired with a positive and negative patch. We denote the ground-truth normal of the anchor patch\u2019s central point, , by . Thereafter, we define a positive patch and negative patch according to the anchor, as:\nwhere and are respectively the central points of and , and and are their corresponding ground-truth normals. is the angle threshold used for identifying positive and negative patches, which is empirically set to 20 degrees.\nTo ensure the feature encoder network can effectively learn sharp features such as edges on the input point clouds, we choose and as close to as possible. To do so, we start by trying to locate and within of , and then gradually increase the search radius if either target cannot be found. By doing so, is more likely to lie on the same isotropic surface as , while is more likely to lie on an adjacent surface without being too far from the edge. To ensure the consistency of each triplet, the anchor, positive and negative patches within each triplet are chosen from the same point cloud. The rotation matrix of is then applied to and as well, to ensure the network learns consistently within each triplet.\nTriplet Loss. Since input patch points are not organised in a specific order, the learning result may be affected by different permutations of the same set of points [4 ###reference_b4###]. To address this issue, we utilise an architecture which comprises of several MLPs and a max-pooling layer, inspired by PointNet [6 ###reference_b6###], as the backbone of our encoder. Within each triplet, we perform feature extraction by feeding each patch (a 500 3 vector) into the network, which aggregates the features as a 1024-dimensional latent vector using the max-pooling function. 
We perform the same operation for , and . The loss for the encoded triplet is given by:\nwhere , and are the latent representations of patch , and , respectively. Here, is the encoder. We empirically set the margin . For pairwise distances of latent vectors, we use the -norm regularisation during our training procedure. Intuitively, patches lying on isotropic and anisotropic surfaces should have small and large angles between their central point normals, respectively. Therefore, the representations of patches on an isotropic surface should be similar while patches of an anisotropic surface should be distinct. Triplet learning accomplishes this by bringing representations of patches on an isotropic surface closer while pushing apart those from an anisotropic surface, and the process is demonstrated in Fig. 2 ###reference_###.\n###figure_2###"
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Normal Estimation",
|
| 45 |
+
"text": "Next we aim to estimate the patch normal (i.e., the normal of the central point) from each patch\u2019s latent representation. This is accomplished by the normal estimation network which simply involves several MLPs. To train the normal estimation network, we firstly sample a patch and pre-process it in the same way as Sec. 3.1 ###reference_###. Then, we feed the patch into the feature encoder to obtain its 1 1024 latent representation which then becomes the input for the normal estimation network. The output is a 1 3 normal vector.\nAs for the loss function for normal estimation, we define the cosine similarity between the estimated normal, , and ground-truth normals within the patch, . We set the exponent of the cosine of the angle to 8, for the sake of preserving sharp features. Inspired by [1 ###reference_b1###], we also introduce a weighting scheme that takes into consideration the cosine similarity of the ground-truth patch\u2019s central point normal and the neighbouring ground-truth normals. Therefore, the weighted loss function for the normal estimation network and its weight function are given by:\nHere, are points in , is the patch\u2019s central point, and is the weighting function based on the ground-truth patch\u2019s central point normal and the normals of neighbouring points. is the support angle which is set to 15 degrees by default. Since the input patch is rotated by the rotation matrix , we multiply the predicted normal by the inverse matrix to get the final normal for the patch\u2019s central point in the original space."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "Experimental Results",
|
| 51 |
+
"text": ""
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.1",
|
| 55 |
+
"parent_section_id": "4",
|
| 56 |
+
"section_name": "Dataset",
|
| 57 |
+
"text": "Our training dataset consists of 22 clean shapes: 11 CAD shapes and 11 non-CAD shapes (Fig. 3 ###reference_###(a)). Each clean shape contains 100,000 points along with their ground-truth normals that are sampled from the shape\u2019s ground-truth surfaces. To ensure the network is trained equally on both types of shapes, we provide the same amount of training data from both CAD and non-CAD shapes. In addition, we also add noisy variants of each clean shape to the training dataset to improve the robustness of our network to noise. Each clean shape has 5 variants with different levels of Gaussian noise (0.25%, 0.5%, 1%, 1.5% and 2.5% of each clean shape\u2019s bounding box diagonal length, respectively). Therefore, we have 132 (22 6) training shapes in total.\nIn both the feature encoding and normal estimation training stages, we sample 8,000 patches from each shape to ensure the network generalises well on sufficient data. We validate our network model on the validation set (Fig. 3 ###reference_###(b)), and test on 12 synthetic shapes (Fig. 3 ###reference_###(c)) with various noise levels, as well as additional raw scanned point clouds.\n###figure_3### ###figure_4###"
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.2",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "Implementation Details",
|
| 63 |
+
"text": "We implement our networks using PyTorch 1.8.0, train and test them on an NVIDIA GeForce RTX 3080 GPU with CUDA 11.3. Both feature encoding and normal estimation networks are trained using an SGD optimiser, with a momentum of 0.9 and an initial learning rate of 0.01. We train the feature encoding network for 5 epochs since it converges quickly, and train the normal estimation network for 50 epochs. During training, the learning rate is multiplied by a 0.1 factor if no improvement happens within 3 consecutive epochs.\n###figure_5###"
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.3",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Comparisons",
|
| 69 |
+
"text": "We compare our normal estimation results on the test dataset with PCA [7 ###reference_b7###], HoughCNN [8 ###reference_b8###], PCPNet [4 ###reference_b4###], Nesti-Net [5 ###reference_b5###], NINormal [9 ###reference_b9###], DI [10 ###reference_b10###], and additionally, DFP [3 ###reference_b3###]. During the comparison study, we retrained all methods, except for HoughCNN and DFP, on our training dataset and used recommended values of parameters for each respective method during testing, to ensure fair comparisons. HoughCNN creates synthetic point set data and computes corresponding accumulators while DFP has a point classification step requiring labelled ground-truth points, indicating they cannot be trained on our dataset. Also, methods such as [8 ###reference_b8###, 5 ###reference_b5###, 9 ###reference_b9###] sometimes produce normals with wrong orientations, and the results are not normalised. Therefore, we flip and normalise the affected normals to guarantee fair comparisons. We use mean squared angular error (MSAE) [3 ###reference_b3###] for evaluating the accuracy of the predicted normals against the ground-truth ones. Table 1 ###reference_### shows MSAE values for the different methods over the testing shapes, where our method achieves the minimum overall MSAE.\nSynthetic Point Clouds. Fig. 4 ###reference_###(a) and Fig. 4 ###reference_###(b) demonstrate results on 6 CAD shapes with 0.5% and 1% random vertex displacement noise, added using MeshLab [16 ###reference_b16###], respectively. Our method outperforms all other methods on Tetrahedron and Pyramid (the third and fourth rows), and ranks the second on Cube, Fandisk and Cylinder (the first, second and sixth rows). While our method generates outputs with less obvious gains on non-CAD shapes since it tends to over-sharpen features, it still produces the minimum MSAE among all testing shapes overall, as shown in Table 1 ###reference_###.\nWe also demonstrate comparison results with DFP [3 ###reference_b3###] which has two separately trained models for feature points and non-feature points. For fair comparisons, we use normal estimation results from the first iteration of DFP since it optimises point positions in further iterations. As can be seen from Fig. 5 ###reference_###, our method outperforms DFP on 2 noisy CAD shapes with 1% random vertex displacement noise in terms of MSAE and inference time: DFP takes 1.69 hours to process every 10,000 points, while ours only requires 5.56 seconds. Since the two shapes in Fig. 5 ###reference_### have a much smaller number of points in them (10,171 points for Tetrahedron and 16,747 points for Cube), it also demonstrates the robustness of our method on less dense point clouds.\nScanned Point Clouds. We also test the effectiveness and scalability of our method on scanned point clouds, which are generally incomplete and consist of uneven point distributions compared to synthetic ones. Fig. 4 ###reference_###(c) demonstrates comparison results on 4 raw scanned point clouds of CAD shapes, where the points are corrupted by noise and their ground-truth normals are known. As shown in Fig. 4 ###reference_###(c), most methods\u2019 results are suboptimal; while Nesti-Net performs relatively well on flat surfaces, there are many predicted normals at the edges that point to wrong directions, increasing the overall angular error. Among all methods, our approach achieves the minimum MSAE on the 4 raw scanned shapes.\nNetwork Size and Inference Time. 
It is worth noting that our method achieves the state-of-the-art performance with a relatively small network size and estimates normals at a relatively faster speed. Table 2 ###reference_### lists the comparison of all network sizes and inference time for all methods. Although Nesti-Net slightly outperforms our results on 4 CAD shapes listed in Fig. 4 ###reference_###, it utilises the largest network. The most lightweight method, DI, and the fastest method, NINormal, both show high sensitivity to noise.\nIn addition, our method is more robust to irregular sampling and fewer points. Please refer to our supplementary document for additional results."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.4",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "Ablation Study",
|
| 75 |
+
"text": "Normal Estimation. During the normal estimation phase, we introduce a cosine similarity exponent of 8 in Eq. (5 ###reference_###) as it effectively preserves sharp features on noisy shapes compared to others. We evaluate this over our validation set\nusing different exponents. As shown in Table 3 ###reference_###, the average MSAE value is the minimum when the exponent is set to 8.\n###figure_6### Feature Encoding Network. Alternatively, we can directly regress the normal vector without our feature encoding network. As illustrated in Fig. 6 ###reference_###, utilising triplet loss to train our feature encoding network is effective in maintaining sharp edges and thus achieves a smaller MSAE value."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "5",
|
| 79 |
+
"parent_section_id": null,
|
| 80 |
+
"section_name": "Conclusion",
|
| 81 |
+
"text": "In this paper, we presented a novel deep learning normal estimation method for 3D point clouds. In its feature encoding phase, the feature encoder creates latent representations of local patches through optimising relative distances within triplets. In the normal estimation phase, these representations are consumed by the normal estimation network which regresses the normals for the central points in the patches using MLPs. Comparison results with other representative methods have demonstrated that our method has achieved the state-of-the-art performance in estimating normals in the presence of noise, especially for sharp features."
|
| 82 |
+
}
|
| 83 |
+
],
|
| 84 |
+
"appendix": [],
|
| 85 |
+
"tables": {
|
| 86 |
+
"1": {
|
| 87 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.1\">Table 1</span>: </span>Average MSAE of each method on the testing shapes.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.3\" style=\"width:323.3pt;height:68.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-8.5pt,1.8pt) scale(0.95,0.95) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.3.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.1.1.1\">Method</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.1.1.2\">PCA</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.1.1.3\">HoughCNN</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.1.1.4\">PCPNet</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.1.1.5\">Nesti-Net</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.1.1.6\">NINormal</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.1.1.7\">DI</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.1.1.8\">Ours</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.3.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.2.1.1\">CAD Shapes</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.2.1.2\">0.3068</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.2.1.3\">0.1043</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.2.1.4\">0.0297</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.2.1.5\">0.0249</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.2.1.6\">0.0599</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.2.1.7\">0.0341</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.2.1.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.1.2.1.8.1\">0.0165</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.1.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T1.3.1.3.2.1\">Non-CAD Shapes</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.3.1.3.2.2\">0.4582</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.3.1.3.2.3\">0.2747</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.3.1.3.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.1.3.2.4.1\">0.2439</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.3.1.3.2.5\">0.2539</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.3.1.3.2.6\">0.2510</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.3.1.3.2.7\">1.0535</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.3.1.3.2.8\">0.2636</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.1.4.3\">\n<td 
class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.4.3.1\">Overall</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.4.3.2\">0.3636</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.4.3.3\">0.1682</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.4.3.4\">0.1100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.4.3.5\">0.1108</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.4.3.6\">0.1316</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.4.3.7\">0.4164</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.3.1.4.3.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.3.1.4.3.8.1\">0.1092</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 88 |
+
"capture": "Table 1: Average MSAE of each method on the testing shapes."
|
| 89 |
+
},
|
| 90 |
+
"2": {
|
| 91 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.1.1\">Table 2</span>: </span>Network size comparison among different approaches and inference time comparison (seconds per 100K points).</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.3\" style=\"width:361.7pt;height:51.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-9.5pt,1.4pt) scale(0.95,0.95) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.3.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.1.1.1\">Method Name</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.1.1.2\">HoughCNN</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.1.1.3\">PCPNet</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.1.1.4\">Nesti-Net</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.1.1.5\">NINormal</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.1.1.6\">DI</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.1.1.7\">DFP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.1.1.8\">Ours</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.3.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.2.1.1\">Network Size (in MB)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.2.1.2\">111.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.2.1.3\">85.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.2.1.4\">2020.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.2.1.5\">39.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.2.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.1.2.1.6.1\">0.037</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.2.1.7\">268.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.3.1.2.1.8\">10.42</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.1.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r\" id=\"S4.T2.3.1.3.2.1\">Inference Time (in Seconds)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.3.1.3.2.2\">74.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.3.1.3.2.3\">227.33</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.3.1.3.2.4\">1234.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.3.1.3.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.1.3.2.5.1\">2.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.3.1.3.2.6\">3.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" 
id=\"S4.T2.3.1.3.2.7\">60840.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.3.1.3.2.8\">55.6</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 92 |
+
"capture": "Table 2: Network size comparison among different approaches and inference time comparison (seconds per 100K points)."
|
| 93 |
+
},
|
| 94 |
+
"3": {
|
| 95 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.1.1\">Table 3</span>: </span>The MSAE of the shapes in our validation set regarding the choice of the exponent in Eq.\u00a0(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2110.10494v2#S3.E5\" title=\"5 \u2023 3.2 Normal Estimation \u2023 3 Method \u2023 Deep Point Cloud Normal Estimation via Triplet Learning\"><span class=\"ltx_text ltx_ref_tag\">5</span></a>).</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T3.3\" style=\"width:160.9pt;height:85.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-4.2pt,2.3pt) scale(0.95,0.95) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.1.1.1\">Exponent</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.1.1.2\">2</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.1.1.3\">4</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.1.1.4\">8</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.1.1.5\">12</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.2.1.1\">Box-Groove</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.2.1.2\">0.0149</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.2.1.3\">0.0101</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.2.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.2.1.4.1\">0.0097</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.2.1.5\">0.0145</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T3.3.1.3.2.1\">Carter</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.1.3.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.3.2.2.1\">0.1285</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.1.3.2.3\">0.1320</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.1.3.2.4\">0.1302</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.1.3.2.5\">0.1472</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S4.T3.3.1.4.3.1\">David</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.1.4.3.2\">0.1040</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.1.4.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.4.3.3.1\">0.1031</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.1.4.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.4.3.4.1\">0.1031</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.3.1.4.3.5\">0.1279</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S4.T3.3.1.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.5.4.1\">Average</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.5.4.2\">0.0825</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.5.4.3\">0.0817</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.5.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.3.1.5.4.4.1\">0.0810</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.3.1.5.4.5\">0.0965</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 96 |
+
"capture": "Table 3: The MSAE of the shapes in our validation set regarding the choice of the exponent in Eq.\u00a0(5)."
|
| 97 |
+
}
|
| 98 |
+
},
|
| 99 |
+
"image_paths": {
|
| 100 |
+
"1": {
|
| 101 |
+
"figure_path": "2110.10494v2_figure_1.png",
|
| 102 |
+
"caption": "Fig. 1: The overall architecture of our method, where the patch size N is empirically set to 500.",
|
| 103 |
+
"url": "http://arxiv.org/html/2110.10494v2/extracted/5490570/Figures/pipeline_final_revised.png"
|
| 104 |
+
},
|
| 105 |
+
"2": {
|
| 106 |
+
"figure_path": "2110.10494v2_figure_2.png",
|
| 107 |
+
"caption": "Fig. 2: Illustration of triplet-based learning. The anchor, positive and negative patches are marked in dark blue, green and red respectively.",
|
| 108 |
+
"url": "http://arxiv.org/html/2110.10494v2/extracted/5490570/Figures/feature_encoding_new.png"
|
| 109 |
+
},
|
| 110 |
+
"3": {
|
| 111 |
+
"figure_path": "2110.10494v2_figure_3.png",
|
| 112 |
+
"caption": "Fig. 3: Demonstration of synthetic shapes where: (a) for training; (b) for validation and (c) for testing.",
|
| 113 |
+
"url": "http://arxiv.org/html/2110.10494v2/extracted/5490570/Figures/train_val_test_new.png"
|
| 114 |
+
},
|
| 115 |
+
"4": {
|
| 116 |
+
"figure_path": "2110.10494v2_figure_4.png",
|
| 117 |
+
"caption": "Fig. 4: MSAE comparisons on noisy CAD shapes, where (a) and (b) demonstrate MSAE on synthetic shapes, and (c) demonstrates MSAE on scanned shapes with known ground-truth normals. The two best results are in bold in each row.",
|
| 118 |
+
"url": "http://arxiv.org/html/2110.10494v2/extracted/5490570/Figures/cad_msae_2_bold_new-min.png"
|
| 119 |
+
},
|
| 120 |
+
"5": {
|
| 121 |
+
"figure_path": "2110.10494v2_figure_5.png",
|
| 122 |
+
"caption": "Fig. 5: MSAE comparisons with DFP on Tetrahedron and Cube with 1% noise.",
|
| 123 |
+
"url": "http://arxiv.org/html/2110.10494v2/extracted/5490570/Figures/dfp_compare_new.png"
|
| 124 |
+
},
|
| 125 |
+
"6": {
|
| 126 |
+
"figure_path": "2110.10494v2_figure_6.png",
|
| 127 |
+
"caption": "Fig. 6: Visualised normals and MSAE (a, c) without, and (b, d) with triplet learning-based feature encoding network.",
|
| 128 |
+
"url": "http://arxiv.org/html/2110.10494v2/extracted/5490570/Figures/ablation_shape_new.png"
|
| 129 |
+
}
|
| 130 |
+
},
|
| 131 |
+
"validation": true,
|
| 132 |
+
"references": [
|
| 133 |
+
{
|
| 134 |
+
"1": {
|
| 135 |
+
"title": "\u201cPointfilter: Point cloud filtering via encoder-decoder modeling,\u201d",
|
| 136 |
+
"author": "Dongbo Zhang, Xuequan Lu, Hong Qin, and Ying He,",
|
| 137 |
+
"venue": "IEEE Transactions on Visualization and Computer Graphics, vol. 27, no. 3, pp. 2015\u20132027, 2021.",
|
| 138 |
+
"url": null
|
| 139 |
+
}
|
| 140 |
+
},
|
| 141 |
+
{
|
| 142 |
+
"2": {
|
| 143 |
+
"title": "\u201c3DFeat-Net: Weakly supervised local 3D features for point cloud registration,\u201d",
|
| 144 |
+
"author": "Zi Jian Yew and Gim Hee Lee,",
|
| 145 |
+
"venue": "in Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part XV, Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair Weiss, Eds. 2018, vol. 11219 of Lecture Notes in Computer Science, pp. 630\u2013646, Springer.",
|
| 146 |
+
"url": null
|
| 147 |
+
}
|
| 148 |
+
},
|
| 149 |
+
{
|
| 150 |
+
"3": {
|
| 151 |
+
"title": "\u201cDeep feature-preserving normal estimation for point cloud filtering,\u201d",
|
| 152 |
+
"author": "Dening Lu, Xuequan Lu, Yangxing Sun, and Jun Wang,",
|
| 153 |
+
"venue": "Computer-Aided Design, vol. 125, pp. 102860, 2020.",
|
| 154 |
+
"url": null
|
| 155 |
+
}
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"4": {
|
| 159 |
+
"title": "\u201cPCPNet: Learning Local Shape Properties from Raw Point Clouds,\u201d",
|
| 160 |
+
"author": "Paul Guerrero, Yanir Kleiman, Maks Ovsjanikov, and Niloy J. Mitra,",
|
| 161 |
+
"venue": "Computer Graphics Forum, vol. 37, no. 2, pp. 75\u201385, 2018.",
|
| 162 |
+
"url": null
|
| 163 |
+
}
|
| 164 |
+
},
|
| 165 |
+
{
|
| 166 |
+
"5": {
|
| 167 |
+
"title": "\u201cNesti-Net: Normal Estimation for Unstructured 3D Point Clouds Using Convolutional Neural Networks,\u201d",
|
| 168 |
+
"author": "Yizhak Ben-Shabat, Michael Lindenbaum, and Anath Fischer,",
|
| 169 |
+
"venue": "in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 10104\u201310112.",
|
| 170 |
+
"url": null
|
| 171 |
+
}
|
| 172 |
+
},
|
| 173 |
+
{
|
| 174 |
+
"6": {
|
| 175 |
+
"title": "\u201cPointNet: Deep learning on point sets for 3D classification and segmentation,\u201d",
|
| 176 |
+
"author": "Charles R. Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas,",
|
| 177 |
+
"venue": "in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.",
|
| 178 |
+
"url": null
|
| 179 |
+
}
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"7": {
|
| 183 |
+
"title": "\u201cSurface reconstruction from unorganized points,\u201d",
|
| 184 |
+
"author": "Hugues Hoppe, Tony DeRose, Tom Duchamp, John McDonald, and Werner Stuetzle,",
|
| 185 |
+
"venue": "SIGGRAPH Comput. Graph., vol. 26, no. 2, pp. 71\u201378, July 1992.",
|
| 186 |
+
"url": null
|
| 187 |
+
}
|
| 188 |
+
},
|
| 189 |
+
{
|
| 190 |
+
"8": {
|
| 191 |
+
"title": "\u201cDeep learning for robust normal estimation in unstructured point clouds,\u201d",
|
| 192 |
+
"author": "Alexandre Boulch and Renaud Marlet,",
|
| 193 |
+
"venue": "Computer Graphics Forum, vol. 35, no. 5, pp. 281\u2013290, 2016.",
|
| 194 |
+
"url": null
|
| 195 |
+
}
|
| 196 |
+
},
|
| 197 |
+
{
|
| 198 |
+
"9": {
|
| 199 |
+
"title": "\u201cNeighbourhood-insensitive point cloud normal estimation network,\u201d",
|
| 200 |
+
"author": "Zirui Wang and Victor Prisacariu,",
|
| 201 |
+
"venue": "in 31st British Machine Vision Conference 2020, BMVC 2020, Virtual Event, UK, September 7-10, 2020. 2020, BMVA Press.",
|
| 202 |
+
"url": null
|
| 203 |
+
}
|
| 204 |
+
},
|
| 205 |
+
{
|
| 206 |
+
"10": {
|
| 207 |
+
"title": "\u201cDeep iterative surface normal estimation,\u201d",
|
| 208 |
+
"author": "J. Lenssen, C. Osendorfer, and J. Masci,",
|
| 209 |
+
"venue": "in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Los Alamitos, CA, USA, jun 2020, pp. 11244\u201311253, IEEE Computer Society.",
|
| 210 |
+
"url": null
|
| 211 |
+
}
|
| 212 |
+
},
|
| 213 |
+
{
|
| 214 |
+
"11": {
|
| 215 |
+
"title": "\u201cCross-Domain Image Retrieval with a Dual Attribute-Aware Ranking Network,\u201d",
|
| 216 |
+
"author": "Junshi Huang, Rogerio Feris, Qiang Chen, and Shuicheng Yan,",
|
| 217 |
+
"venue": "in 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1062\u20131070.",
|
| 218 |
+
"url": null
|
| 219 |
+
}
|
| 220 |
+
},
|
| 221 |
+
{
|
| 222 |
+
"12": {
|
| 223 |
+
"title": "\u201cFaceNet: A unified embedding for face recognition and clustering,\u201d",
|
| 224 |
+
"author": "Florian Schroff, Dmitry Kalenichenko, and James Philbin,",
|
| 225 |
+
"venue": "in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.",
|
| 226 |
+
"url": null
|
| 227 |
+
}
|
| 228 |
+
},
|
| 229 |
+
{
|
| 230 |
+
"13": {
|
| 231 |
+
"title": "\u201cLearning incremental triplet margin for person re-identification,\u201d",
|
| 232 |
+
"author": "Yingying Zhang, Qiaoyong Zhong, Liang Ma, Di Xie, and Shiliang Pu,",
|
| 233 |
+
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, July 2019.",
|
| 234 |
+
"url": null
|
| 235 |
+
}
|
| 236 |
+
},
|
| 237 |
+
{
|
| 238 |
+
"14": {
|
| 239 |
+
"title": "\u201cPPFNet: Global context aware local features for robust 3D point matching,\u201d",
|
| 240 |
+
"author": "Haowen Deng, Tolga Birdal, and Slobodan Ilic,",
|
| 241 |
+
"venue": "in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.",
|
| 242 |
+
"url": null
|
| 243 |
+
}
|
| 244 |
+
},
|
| 245 |
+
{
|
| 246 |
+
"15": {
|
| 247 |
+
"title": "\u201cTransductive Zero-Shot Learning for 3D Point Cloud Classification,\u201d",
|
| 248 |
+
"author": "Ali Cheraghian, Shafin Rahman, Dylan Campbell, and Lars Petersson,",
|
| 249 |
+
"venue": "in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Mar. 2020.",
|
| 250 |
+
"url": null
|
| 251 |
+
}
|
| 252 |
+
},
|
| 253 |
+
{
|
| 254 |
+
"16": {
|
| 255 |
+
"title": "\u201cMeshlab: an open-source mesh processing tool.,\u201d",
|
| 256 |
+
"author": "Paolo Cignoni, Marco Callieri, Massimiliano Corsini, Matteo Dellepiane, Fabio Ganovelli, Guido Ranzuglia, et al.,",
|
| 257 |
+
"venue": "in Eurographics Italian chapter conference. Salerno, Italy, 2008, vol. 2008, pp. 129\u2013136.",
|
| 258 |
+
"url": null
|
| 259 |
+
}
|
| 260 |
+
}
|
| 261 |
+
],
|
| 262 |
+
"url": "http://arxiv.org/html/2110.10494v2"
|
| 263 |
+
}
20240323/2111.08136v2.json
ADDED
@@ -0,0 +1,125 @@
| 1 |
+
{
|
| 2 |
+
"title": "Convergence of Anisotropic Consensus-Based Optimization in Mean-Field Law",
|
| 3 |
+
"abstract": "In this paper we study anisotropic consensus-based optimization (CBO), a population-based metaheuristic derivative-free optimization method capable of globally minimizing nonconvex and nonsmooth functions in high dimensions.\nCBO is based on stochastic swarm intelligence, and inspired by consensus dynamics and opinion formation.\nCompared to other metaheuristic algorithms like Particle Swarm Optimization, CBO is of a simpler nature and therefore more amenable to theoretical analysis.\nBy adapting a recently established proof technique, we show that anisotropic CBO converges globally with a dimension-independent rate for a rich class of objective functions under minimal assumptions on the initialization of the method.\nMoreover, the proof technique reveals that CBO performs a convexification of the optimization problem as the number of particles goes to infinity, thus providing an insight into the internal CBO mechanisms responsible for the success of the method.\nTo motivate anisotropic CBO from a practical perspective, we further test the method on a complicated high-dimensional benchmark problem, which is well understood in the machine learning literature.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Several problems arising throughout all quantitative disciplines are concerned with the global unconstrained optimization of a problem-dependent objective function and the search for the associated minimizing argument\nwhich is assumed to exist and be unique in what follows.\nBecause of nowadays data deluge such optimization problems are usually high-dimensional.\nIn machine learning, for instance, one is interested in finding the optimal parameters of a neural network (NN) to accomplish various tasks, such as clustering, classification, and regression.\nThe availability of huge amounts of training data for various real-world applications allows practitioners to work with models involving a large number of trainable parameters aiming for a high expressivity and accuracy of the trained model.\nThis makes the resulting optimization process a high-dimensional problem.\nSince typical model architectures consist of many layers with a large amount of neurons, and include nonlinear and potentially nonsmooth activation functions, the training process is in general a high-dimensional nonconvex optimization problem and therefore a particularly hard task.\nMetaheuristics have a long history as state-of-the-art methods when it comes to tackling hard optimization problems.\nInspired by self-organization and collective behavior in nature or human society, such as the swarming of flocks of birds or schools of fish [3 ###reference_b3###], or opinion formation [20 ###reference_b20###],\nthey orchestrate an interplay between locally confined procedures and global strategies, randomness and deterministic decisions, to ensure a robust search for the global minimizer.\nSome prominent examples are Random Search [19 ###reference_b19###], Evolutionary Programming [7 ###reference_b7###], Genetic Algorithms [11 ###reference_b11###], Ant Colony Optimization [6 ###reference_b6###], Particle Swarm Optimization [14 ###reference_b14###] and Simulated Annealing [1 ###reference_b1###].\nCBO follows those guiding principles, but is of much simpler nature and more amenable to theoretical analysis.\nThe method uses particles , which are initialized independently according to some law , to explore the domain and to form a global consensus about the minimizer as time passes.\nFor parameters the dynamics of each particle is given by\nwhere denotes the empirical measure of the particles.\nThe first term in (1 ###reference_###) is a drift term dragging the respective particle towards the momentaneous consensus point, a weighted average of the particles\u2019 positions, computed as\nand motivated by the fact that for if the is unique.\nTo feature the exploration of the energy landscape of , the second term in (1 ###reference_###) is a diffusion injecting randomness into the dynamics through independent standard Brownian motions .\nThe two commonly studied diffusion types are isotropic [18 ###reference_b18###, 2 ###reference_b2###, 9 ###reference_b9###] and anisotropic [4 ###reference_b4###] diffusion with\nwhere is the identity matrix and the operator mapping a vector onto a diagonal matrix with the vector as its diagonal.\nThe term\u2019s scaling encourages in particular particles far from to explore larger regions.\nThe coordinate-dependent scaling of anisotropic diffusion has proven to be particularly beneficial for high-dimensional optimization problems by yielding dimension-independent convergence rates (see Figure 1 ###reference_###) and therefore improving both computational complexity and success probability of 
the algorithm [4 ###reference_b4###, 8 ###reference_b8###].\nA theoretical convergence analysis of the CBO dynamics is possible either on the microscopic level (1 ###reference_###) or by analyzing the macroscopic behavior of the particle density through a mean-field limit.\nIn the large particle limit a particle is not influenced by individual particles but only by the average behavior of all particles.\nAs shown in [12 ###reference_b12###], the empirical random particle measure converges in law to the deterministic particle density , which weakly (see Definition 1 ###reference_inition1###) satisfies the non-linear Fokker-Planck equation\nA quantitative analysis of the convergence rate remains, on non-compact domains, an open problem, see, e.g., [9 ###reference_b9###, Remark 2].\nAnalyzing a mean-field limit such as (2 ###reference_###) allows for establishing strong qualitative theoretical guarantees about CBO methods, paving the way to understand the internal mechanisms at play."
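For concreteness, here is a minimal sketch of one discretized (Euler–Maruyama) step of the anisotropic CBO dynamics, using the Rastrigin objective and the parameter values reported in the caption of Figure 1; the function and variable names, the particle count and the stability shift are illustrative assumptions rather than the paper's reference implementation.

```python
import numpy as np

def rastrigin(V):
    # illustrative objective; each row of V is one particle position
    return np.sum(V**2 + 2.5 * (1.0 - np.cos(2.0 * np.pi * V)), axis=1)

def consensus_point(V, energies, alpha):
    # Gibbs-type weighted average of the particles; subtracting the minimum
    # is a numerical-stability assumption and does not change the weight ratios
    w = np.exp(-alpha * (energies - energies.min()))
    return (w[:, None] * V).sum(axis=0) / w.sum()

def anisotropic_cbo_step(V, objective, lam=1.0, sigma=0.32, alpha=1e15, dt=0.01, rng=None):
    """One Euler-Maruyama step: drift towards the consensus point plus
    coordinate-wise (anisotropic) scaled Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    v_alpha = consensus_point(V, objective(V), alpha)
    diff = V - v_alpha
    noise = rng.standard_normal(V.shape)
    return V - dt * lam * diff + sigma * np.sqrt(dt) * diff * noise

# usage sketch: 200 particles in dimension 16, initialised i.i.d. uniformly
V = np.random.default_rng(0).uniform(-3.0, 3.0, size=(200, 16))
for _ in range(1000):
    V = anisotropic_cbo_step(V, rastrigin)
```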
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Global Convergence in Mean-Field Law",
|
| 15 |
+
"text": "In this section we first recite a well-posedness result about the Fokker-Planck equation (2 ###reference_###) and then present the main result about global convergence."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Definition of Weak Solutions and Well-Posedness",
|
| 21 |
+
"text": "We begin by defining weak solutions of the Fokker-Planck equation (2 ###reference_###).\nLet , .\nWe say satisfies the Fokker-Planck equation (2 ###reference_###) with initial condition in the weak sense in the time interval , if we have for all and all\nand pointwise.\nIn what follows the case of CBO with anisotropic diffusion is considered, i.e., in Equations (1 ###reference_###), (2 ###reference_###) and (4 ###reference_###).\nAnalogously to the well-posedness results [2 ###reference_b2###, Theorems 3.1, 3.2] for CBO with isotropic diffusion, we can obtain well-posedness of (2 ###reference_###) for anisotropic CBO.\nLet , and consider with , which, for some constants , satisfies\nIf in addition, either , or, for some , satisfies\nthen there exists a law weakly satisfying Equation (2 ###reference_###).\nThe proof is based on the Leray-Schauder fixed point theorem and uses the same arguments as the ones provided for [2 ###reference_b2###, Theorems 3.1, 3.2].\nAs discussed in [9 ###reference_b9###, Remark 7], the proof of Theorem 2.1 ###reference_theorem1### justifies an extension of the test function space in Definition 1 ###reference_inition1### to"
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Main Results",
|
| 27 |
+
"text": "We now present the main result about global convergence in mean-field law for objective functions that satisfy the following conditions.\nWe consider functions , for which\nthere exists such that , and\nthere exist , and such that\nAssumption A2 ###reference_i2### can be regarded as a tractability condition of the energy landscape around the minimizer and in the farfield.\nEquation (5 ###reference_###) requires the local coercivity of , whereas (6 ###reference_###) prevents that far away from .\nDefinition 2 ###reference_inition2### covers a wide range of function classes, including for instance the Rastrigin function, see Figure 0(a) ###reference_sf1###, and objectives related to various machine learning tasks, see, e.g., [10 ###reference_b10###].\nLet be as in Definition 2 ###reference_inition2###.\nMoreover, let be such that\nDefine .\nFix any and , parameters with , and the time horizon\nThen there exists , which depends (among problem dependent quantities) on and , such that for all ,\nif is a weak solution to the Fokker-Planck equation (2 ###reference_###) on the time interval with initial condition , we have .\nFurthermore, until reaches the prescribed accuracy , we have the exponential decay\nand, up to a constant, the same behavior for .\nThe rate of convergence obtained in Theorem 2.2 ###reference_theorem2### is confirmed numerically by the experiments depicted in Figure 1 ###reference_###.\nWe emphasize the dimension-independent convergence of CBO with anisotropic diffusion, contrasting the dimension-dependent rate of isotropic CBO, cf. [9 ###reference_b9###, Theorem 12].\nRefining the argument of the proof of Theorem 2.2 ###reference_theorem2### allows to show that the time , where is achieved, satisfies .\nFor the technical details we refer to the proof of [9 ###reference_b9###, Theorem 12],\nwhere an analogous statement is shown to hold in the setting of isotropic noise.\nA proof for the herein investigated anisotropic setting is provided in full detail in the dissertation of the third author."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Proof of Theorem 2.2",
|
| 33 |
+
"text": "This section provides the proof details for Theorem 2.2 ###reference_theorem2###, starting with a sketch in Section 3.1 ###reference_###.\nSections 3.2 ###reference_###\u20133.4 ###reference_### present statements, which are needed in the proof and may be of independent interest. Section 3.5 ###reference_### completes the proof.\nWithout loss of generality we assume throughout the proof."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Proof Sketch",
|
| 39 |
+
"text": "The main idea is to show that satisfies the differential inequality\nuntil .\nThe first step towards (9 ###reference_###) is to derive a differential inequality for using the dynamics of , which is done in Lemma 1 ###reference_ma1###.\nIn order to control the appearing quantity , we establish a quantitative Laplace principle.\nNamely, under the inverse continuity property A2 ###reference_i2###, Proposition 1 ###reference_position1### shows\nwhere is decreasing with as .\nThus, can be made arbitrarily small by suitable choices of and , as long as we can guarantee for all and at all times .\nThe latter requires non-zero initial mass as well as an active Brownian motion, as made rigorous in Proposition 2 ###reference_position2###."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Evolution of the Mean-Field Limit",
|
| 45 |
+
"text": "We now derive the evolution inequality of the energy functional .\nLet , and fix .\nMoreover, let and let be a weak solution to Equation (2 ###reference_###).\nThen satisfies\nNoting that is in and recalling that satisfies the identity (4 ###reference_###) for all test functions in , see Remark 1 ###reference_ark1###,\nwe obtain\nwhere we used and for all .\nFollowing the steps taken in [9 ###reference_b9###, Lemma 17] yields the statement."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.3",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "Quantitative Laplace Principle",
|
| 51 |
+
"text": "The Laplace principle asserts that as as long as the global minimizer is in the support of .\nUnder the assumption of the inverse continuity property this can be used to qualitatively characterize the proximity of to the global minimizer .\nHowever, as it neither allows to quantify this proximity nor gives a suggestion on how to choose to reach a certain approximation quality, we introduced a quantitative version in [9 ###reference_b9###, Proposition 21], which we now adapt suitably to satisfy the anisotropic setting.\nLet , and fix . For any we define .\nThen, under the inverse continuity property A2 ###reference_i2###, for any and such that , we have\nFollowing the lines of the proof of [9 ###reference_b9###, Proposition 22] but replacing all balls and norms by balls and norms, respectively, we obtain\nThe statement now follows noting that ."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.4",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "A Lower Bound for the Probability Mass around",
|
| 57 |
+
"text": "In this section we provide a lower bound for the probability mass of , where is a small radius.\nThis is achieved by defining a mollifier so that and studying the evolution of the right-hand side.\nFor we define the mollifier by\nWe have , , and\nis a tensor product of classical well-studied mollifiers.\nLet , , and fix parameters .\nAssume weakly solves the Fokker-Planck equation (2 ###reference_###) in the sense of\nDefinition 1 ###reference_inition1### with initial condition and for .\nThen, for all we have\nwith\nfor any with \nand for any satisfying .\nIn order to ensure a finite decay rate in Proposition 2 ###reference_position2### it is crucial to have a non-vanishing diffusion .\nBy the properties of the mollifier in Lemma 2 ###reference_ma2### we have\nOur strategy is to derive a lower bound for the right-hand side of this inequality.\nUsing the weak solution property of and the fact that , we obtain\nfor .\nWe now aim for showing uniformly on individually for each and for as in the statement.\nSince the mollifier and its derivatives vanish outside of \nwe restrict our attention to the open -ball .\nTo achieve the lower bound over , we introduce for each the subsets\nand\nwhere .\nFor fixed we now decompose according to\nwhich is illustrated in Figure 2 ###reference_### for different positions of and values of .\n###figure_1### ###figure_2### ###figure_3### In the following we treat each of these three subsets separately.\nSubset :\nWe have for each , which can be used to independently derive lower bounds for both terms and .\nFor , we insert the expression for from Lemma 2 ###reference_ma2### to get\nwhere is used in the last inequality.\nFor we insert the expression for from Lemma 2 ###reference_ma2### to obtain\nwhere the last inequality uses .\nSubset :\nAs we have .\nWe observe that for all in this subset whenever\nThe first term on the left-hand side in (17 ###reference_###) can be bounded from above exploiting that and by using the relation .\nMore precisely, we have\nwhere the last inequality follows since .\nFor the second term on the left-hand side in (17 ###reference_###) we can use as per assumption, to get\nHence, (17 ###reference_###) holds and we have uniformly on this subset.\nSubset :\nAs we have .\nWe first note that whenever , provided , so nothing needs to be done if .\nOtherwise, if , we exploit to get\nUsing this, can be bounded from below by\nFor , the nonnegativity of implies , whenever\nThis holds for , if as implied by the assumption.\nConcluding the proof:\nUsing the evolution of as in (13 ###reference_###) and the individual decompositions of for the terms , we now get\nAn application of Gr\u00f6nwall\u2019s inequality concludes the proof."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.5",
|
| 61 |
+
"parent_section_id": "3",
|
| 62 |
+
"section_name": "Proof of Theorem 2.2",
|
| 63 |
+
"text": "We now have all necessary tools to conclude the global convergence proof.\nLet us first choose the parameter such that\nwhere we introduce the definitions\nas well as\nMoreover, is as defined in (12 ###reference_###) in Proposition 2 ###reference_position2### with and with .\nBy construction, and .\nMoreover, recalling the notation from Proposition 1 ###reference_position1###, we have by definition of .\nFurthermore, since , the continuity of ensures that there exists such that for all , yielding also .\n\nLet us now define the time horizon by\nwith .\nNotice for later use that .\nOur aim is to show that with and that we have at least exponential decay until reaches the prescribed accuracy .\n\nFirst, however, let us ensure that , which follows from the continuity of the mappings and since and .\nWhile the former holds by assumption, the latter follows by applying Proposition 1 ###reference_position1### with and as defined before, which gives\nwhere the first inequality in the last line holds by the choice of in (18 ###reference_###).\n\nLet us now show that decays at least exponentially fast up to time .\nLemma 1 ###reference_ma1### provides an upper bound for the time derivative of , given by\nCombining this with the definition of in (20 ###reference_###) we have by construction\nGr\u00f6nwall\u2019s inequality implies the upper bound\nAccordingly, we note that is decreasing in , which implies the decay of the function as well.\nHence, recalling the definition of , we may bound\nWe now conclude by showing with .\nFor this we distinguish the following three cases.\n\nCase :\nIf , we can use the definition of in (8 ###reference_###) and the time-evolution bound of in (22 ###reference_###) to conclude that .\nHence, by definition of in (20 ###reference_###), we find and .\n\nCase and :\nNothing needs to be discussed in this case.\n\nCase and :\nWe shall show that this case can never occur by verifying that due to the choice of in (18 ###reference_###).\nNamely, by applying again Proposition 1 ###reference_position1### with and as defined before, we get\nSince, thanks to (23 ###reference_###), we have the bound for , which is in particular independent of , Proposition 2 ###reference_position2### guarantees that there exists a independent of (but dependent on and ) with\nwhere we used (7 ###reference_###) for bounding the initial mass and the fact that (as defined in Equation (10 ###reference_###)) is bounded from below on by .\nWith this we can continue the chain of inequalities in (24 ###reference_###) to obtain\nwhere the first inequality in the last line holds by the choice of in (18 ###reference_###).\nThis establishes the desired contradiction, again as consequence of the continuity of the mappings and ."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4",
|
| 67 |
+
"parent_section_id": null,
|
| 68 |
+
"section_name": "A Machine Learning Example",
|
| 69 |
+
"text": "In this section, we showcase the practicability of the implementation of anisotropic CBO as described in [4 ###reference_b4###, Algorithm 2.1] for problems appearing in machine learning by training a shallow and a convolutional NN (CNN) classifier for the MNIST dataset of handwritten digits [16 ###reference_b16###].\nLet us emphasize that it is not our aim to challenge the state of the art for this task by employing the most sophisticated model or intricate data preprocessing.\nWe merely believe that this is a well-understood, complex, high-dimensional benchmark to demonstrate that CBO achieves good results already with limited computational capacities.\nLet us now describe the NN architectures used in our numerical experiment, see also Figure 3 ###reference_###.\n###figure_4### ###figure_5### Each input image is represented by a matrix of dimension with entries valued between and depending on the grayscale of the respective pixel.\nFor the shallow neural net (see Figure 2(a) ###reference_sf1###) the image is first reshaped to a vector before being passed through a dense layer of the form with trainable weight matrix and bias vector .\nThe CNN (see Figure 2(b) ###reference_sf2###) has learnable kernels and its architecture is similar to the one of the LeNet-1, cf. [15 ###reference_b15###, Section III.C.7].\nIn both networks a batch normalization step is included after each activation, which entails a considerably faster training process.\nMoreover, in the final layers a softmax activation function is applied so that the output can be interpreted as a probability distribution over the digits.\nIn total, the number of unknowns to be trained in case of the shallow NN is , which compares to free parameters for the CNN.\nWe denote the parameters of the NN by and its forward pass by .\nAs a loss function during training we use the categorical crossentropy loss with denoting the output of the NN for a training sample consisting of image and label.\nThis gives rise to the objective function , where denote the training samples.\nWhen evaluating the performance of the NN we determine the accuracy on a test set by counting the number of successful predictions.\nThe used implementation of anisotropic CBO combines ideas presented in [10 ###reference_b10###, Section 2.2] with the algorithm proposed in [4 ###reference_b4###].\nMore precisely, it employs random mini-batch ideas when evaluating the objective function and when computing the consensus point , meaning that is only evaluated on a random subset of size of the training dataset and is only computed from a random subset of size of all particles.\nWhile this reduces the computational complexity, it simultaneously increases the stochasticity, which enhances the ability to escape from local optima.\nFurthermore, inspired by Simulated Annealing, a cooling strategy for the parameters and is used as well as a variance-based particle reduction technique similar to ideas from Genetic Algorithms.\nMore specifically, is multiplied by after each epoch, while the diffusion parameter follows the schedule .\nFor our experiments we choose the parameters , and , and discrete time step size for training both the shallow and the convolutional NN.\nWe use particles, which are initialized according to .\nThe mini-batch sizes are and and despite being computed only on a basis of particles, all particles are updated in every time step, referred to as the full update in [4 ###reference_b4###].\nWe emphasize that hyperparameters have not been tuned 
extensively.\nIn Figure 4 ###reference_### we report the results of our experiment.\n###figure_6### While achieving a test accuracy of almost for the shallow NN, we obtain around accuracy with the CNN.\nFor comparison, when trained with backpropagation with finely tuned parameters, a comparable CNN achieves accuracy, cf. [15 ###reference_b15###, Figure 9].\nIn view of these results, CBO can be regarded as a successful optimizer for machine learning tasks, which performs comparably to the state of the art.\nAt the same time it is worth highlighting that CBO is extremely versatile and customizable, does not require gradient information or substantial hyperparameter tuning and has the potential to be parallelized."
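Since the batch sizes and cooling schedules are only partly legible above, here is a hedged sketch of the two mini-batch ideas described (consensus point computed from a random subset of particles, objective evaluated on a random mini-batch of the training data); all names, sizes and the cooling rule are illustrative assumptions.

```python
import numpy as np

def minibatch_consensus(V, loss_on_batch, data, alpha,
                        particle_batch=100, data_batch=60, rng=None):
    """Consensus point from a random particle subset, with the objective
    evaluated only on a random mini-batch of the training data.
    data is assumed to be an array-like of training samples."""
    rng = np.random.default_rng() if rng is None else rng
    p_idx = rng.choice(len(V), size=min(particle_batch, len(V)), replace=False)
    d_idx = rng.choice(len(data), size=min(data_batch, len(data)), replace=False)
    energies = np.array([loss_on_batch(V[i], data[d_idx]) for i in p_idx])
    w = np.exp(-alpha * (energies - energies.min()))
    return (w[:, None] * V[p_idx]).sum(axis=0) / w.sum()

def cooled_parameters(alpha, sigma0, epoch):
    # illustrative cooling rule in the spirit of Simulated Annealing:
    # alpha grows multiplicatively, sigma decays on a logarithmic schedule
    return alpha * 2.0, sigma0 / np.log2(epoch + 2)
```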
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "Conclusion",
|
| 75 |
+
"text": "In this paper we establish the global convergence of anisotropic consensus-based optimization (CBO) to the global minimizer in mean-field law with dimension-independent convergence rate by adapting the proof technique developed in [9 ###reference_b9###].\nIt is based on the insight that the dynamics of individual particles follow, on average, the gradient flow dynamics of the map .\nFurthermore, by utilizing the implementation of anisotropic CBO suggested in [4 ###reference_b4###], we demonstrate the practicability of the method by training the well-known LeNet-1 on the MNIST data set, achieving around % accuracy after few epochs with just 100 particles.\nIn subsequent work we plan to extend our theoretical understanding of CBO to the finite particle regime, and aim to provide extensive numerical studies.\nWe also intend to use this approach to explain the mean-field law convergence behavior of other metaheuristics such as Particle Swarm Optimization, see, e.g., [5 ###reference_b5###, 13 ###reference_b13###]."
|
| 76 |
+
}
|
| 77 |
+
],
|
| 78 |
+
"appendix": [],
|
| 79 |
+
"tables": {},
|
| 80 |
+
"image_paths": {
|
| 81 |
+
"1(a)": {
|
| 82 |
+
"figure_path": "2111.08136v2_figure_1(a).png",
|
| 83 |
+
"caption": "(a) The Rastrigin function in one coordinate direction\nFigure 1: A demonstration of the benefit of using anisotropic diffusion in CBO.\nFor the Rastrigin function \u2130\u2062(v)=\u2211k=1dvk2+52\u2062(1\u2212cos\u2061(2\u2062\u03c0\u2062vk))\u2130\ud835\udc63superscriptsubscript\ud835\udc581\ud835\udc51superscriptsubscript\ud835\udc63\ud835\udc5825212\ud835\udf0bsubscript\ud835\udc63\ud835\udc58{\\cal E}(v)\\!=\\!\\sum_{k=1}^{d}\\!v_{k}^{2}\\!+\\!\\frac{5}{2}(1\\!-\\!\\cos(2\\pi v_{k%\n}))caligraphic_E ( italic_v ) = \u2211 start_POSTSUBSCRIPT italic_k = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT italic_v start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT + divide start_ARG 5 end_ARG start_ARG 2 end_ARG ( 1 - roman_cos ( 2 italic_\u03c0 italic_v start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) ) with v*=0superscript\ud835\udc630v^{*}\\!=\\!0italic_v start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT = 0 and spurious local minima (see (a)), we evolve the discretized system of isotropic and anisotropic CBO using N=320000\ud835\udc41320000N=320000italic_N = 320000 particles, discrete time step size \u0394\u2062t=0.01\u0394\ud835\udc610.01\\Delta t=0.01roman_\u0394 italic_t = 0.01 and \u03b1=1015\ud835\udefcsuperscript1015\\alpha=10^{15}italic_\u03b1 = 10 start_POSTSUPERSCRIPT 15 end_POSTSUPERSCRIPT, \u03bb=1\ud835\udf061\\lambda=1italic_\u03bb = 1, and \u03c3=0.32\ud835\udf0e0.32\\sigma=0.32italic_\u03c3 = 0.32 for different dimensions d\u2208{4,8,12,16}\ud835\udc51481216d\\in\\{4,8,12,16\\}italic_d \u2208 { 4 , 8 , 12 , 16 }.\nWe observe in (b) that the convergence rate of the energy functional \ud835\udcb1\u2062(\u03c1^tN)\ud835\udcb1superscriptsubscript^\ud835\udf0c\ud835\udc61\ud835\udc41{\\cal V}(\\widehat{\\rho}_{t}^{N})caligraphic_V ( over^ start_ARG italic_\u03c1 end_ARG start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_N end_POSTSUPERSCRIPT ) for isotropic CBO (dashed lines) is affected by the ambient dimension d\ud835\udc51ditalic_d, whereas anisotropic CBO (solid lines) converges independently from d\ud835\udc51ditalic_d with rate (2\u2062\u03bb\u2212\u03c32)2\ud835\udf06superscript\ud835\udf0e2(2\\lambda-\\sigma^{2})( 2 italic_\u03bb - italic_\u03c3 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ).",
|
| 84 |
+
"url": "http://arxiv.org/html/2111.08136v2/x1.png"
|
| 85 |
+
},
|
| 86 |
+
"1(b)": {
|
| 87 |
+
"figure_path": "2111.08136v2_figure_1(b).png",
|
| 88 |
+
"caption": "(b) Evolution of \ud835\udcb1\u2062(\u03c1^tN)\ud835\udcb1superscriptsubscript^\ud835\udf0c\ud835\udc61\ud835\udc41{\\cal V}(\\widehat{\\rho}_{t}^{N})caligraphic_V ( over^ start_ARG italic_\u03c1 end_ARG start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_N end_POSTSUPERSCRIPT ) for isotropic and anisotropic CBO for different dimensions\nFigure 1: A demonstration of the benefit of using anisotropic diffusion in CBO.\nFor the Rastrigin function \u2130\u2062(v)=\u2211k=1dvk2+52\u2062(1\u2212cos\u2061(2\u2062\u03c0\u2062vk))\u2130\ud835\udc63superscriptsubscript\ud835\udc581\ud835\udc51superscriptsubscript\ud835\udc63\ud835\udc5825212\ud835\udf0bsubscript\ud835\udc63\ud835\udc58{\\cal E}(v)\\!=\\!\\sum_{k=1}^{d}\\!v_{k}^{2}\\!+\\!\\frac{5}{2}(1\\!-\\!\\cos(2\\pi v_{k%\n}))caligraphic_E ( italic_v ) = \u2211 start_POSTSUBSCRIPT italic_k = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT italic_v start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT + divide start_ARG 5 end_ARG start_ARG 2 end_ARG ( 1 - roman_cos ( 2 italic_\u03c0 italic_v start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT ) ) with v*=0superscript\ud835\udc630v^{*}\\!=\\!0italic_v start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT = 0 and spurious local minima (see (a)), we evolve the discretized system of isotropic and anisotropic CBO using N=320000\ud835\udc41320000N=320000italic_N = 320000 particles, discrete time step size \u0394\u2062t=0.01\u0394\ud835\udc610.01\\Delta t=0.01roman_\u0394 italic_t = 0.01 and \u03b1=1015\ud835\udefcsuperscript1015\\alpha=10^{15}italic_\u03b1 = 10 start_POSTSUPERSCRIPT 15 end_POSTSUPERSCRIPT, \u03bb=1\ud835\udf061\\lambda=1italic_\u03bb = 1, and \u03c3=0.32\ud835\udf0e0.32\\sigma=0.32italic_\u03c3 = 0.32 for different dimensions d\u2208{4,8,12,16}\ud835\udc51481216d\\in\\{4,8,12,16\\}italic_d \u2208 { 4 , 8 , 12 , 16 }.\nWe observe in (b) that the convergence rate of the energy functional \ud835\udcb1\u2062(\u03c1^tN)\ud835\udcb1superscriptsubscript^\ud835\udf0c\ud835\udc61\ud835\udc41{\\cal V}(\\widehat{\\rho}_{t}^{N})caligraphic_V ( over^ start_ARG italic_\u03c1 end_ARG start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_N end_POSTSUPERSCRIPT ) for isotropic CBO (dashed lines) is affected by the ambient dimension d\ud835\udc51ditalic_d, whereas anisotropic CBO (solid lines) converges independently from d\ud835\udc51ditalic_d with rate (2\u2062\u03bb\u2212\u03c32)2\ud835\udf06superscript\ud835\udf0e2(2\\lambda-\\sigma^{2})( 2 italic_\u03bb - italic_\u03c3 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ).",
|
| 89 |
+
"url": "http://arxiv.org/html/2111.08136v2/x2.png"
|
| 90 |
+
},
|
| 91 |
+
"2(a)": {
|
| 92 |
+
"figure_path": "2111.08136v2_figure_2(a).png",
|
| 93 |
+
"caption": "(a) v\u03b1\u2062(\u03c1t)\u2208\u03a9rsubscript\ud835\udc63\ud835\udefcsubscript\ud835\udf0c\ud835\udc61subscript\u03a9\ud835\udc5fv_{\\alpha}({\\rho_{t}})\\in\\Omega_{r}italic_v start_POSTSUBSCRIPT italic_\u03b1 end_POSTSUBSCRIPT ( italic_\u03c1 start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ) \u2208 roman_\u03a9 start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT, \u03c3=0.2\ud835\udf0e0.2\\sigma=0.2italic_\u03c3 = 0.2\nFigure 2: Visualization of the decomposition of \u03a9rsubscript\u03a9\ud835\udc5f\\Omega_{r}roman_\u03a9 start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT as in (16) for different positions of v\u03b1\u2062(\u03c1t)subscript\ud835\udc63\ud835\udefcsubscript\ud835\udf0c\ud835\udc61v_{\\alpha}({\\rho_{t}})italic_v start_POSTSUBSCRIPT italic_\u03b1 end_POSTSUBSCRIPT ( italic_\u03c1 start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ) and values of \u03c3\ud835\udf0e\\sigmaitalic_\u03c3.",
|
| 94 |
+
"url": "http://arxiv.org/html/2111.08136v2/x3.png"
|
| 95 |
+
},
|
| 96 |
+
"2(b)": {
|
| 97 |
+
"figure_path": "2111.08136v2_figure_2(b).png",
|
| 98 |
+
"caption": "(b) v\u03b1\u2062(\u03c1t)\u2209\u03a9rsubscript\ud835\udc63\ud835\udefcsubscript\ud835\udf0c\ud835\udc61subscript\u03a9\ud835\udc5fv_{\\alpha}({\\rho_{t}})\\not\\in\\Omega_{r}italic_v start_POSTSUBSCRIPT italic_\u03b1 end_POSTSUBSCRIPT ( italic_\u03c1 start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ) \u2209 roman_\u03a9 start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT, \u03c3=0.2\ud835\udf0e0.2\\sigma=0.2italic_\u03c3 = 0.2\nFigure 2: Visualization of the decomposition of \u03a9rsubscript\u03a9\ud835\udc5f\\Omega_{r}roman_\u03a9 start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT as in (16) for different positions of v\u03b1\u2062(\u03c1t)subscript\ud835\udc63\ud835\udefcsubscript\ud835\udf0c\ud835\udc61v_{\\alpha}({\\rho_{t}})italic_v start_POSTSUBSCRIPT italic_\u03b1 end_POSTSUBSCRIPT ( italic_\u03c1 start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ) and values of \u03c3\ud835\udf0e\\sigmaitalic_\u03c3.",
|
| 99 |
+
"url": "http://arxiv.org/html/2111.08136v2/x4.png"
|
| 100 |
+
},
|
| 101 |
+
"2(c)": {
|
| 102 |
+
"figure_path": "2111.08136v2_figure_2(c).png",
|
| 103 |
+
"caption": "(c) v\u03b1\u2062(\u03c1t)\u2209\u03a9rsubscript\ud835\udc63\ud835\udefcsubscript\ud835\udf0c\ud835\udc61subscript\u03a9\ud835\udc5fv_{\\alpha}({\\rho_{t}})\\not\\in\\Omega_{r}italic_v start_POSTSUBSCRIPT italic_\u03b1 end_POSTSUBSCRIPT ( italic_\u03c1 start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ) \u2209 roman_\u03a9 start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT, \u03c3=1\ud835\udf0e1\\sigma=1italic_\u03c3 = 1\nFigure 2: Visualization of the decomposition of \u03a9rsubscript\u03a9\ud835\udc5f\\Omega_{r}roman_\u03a9 start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT as in (16) for different positions of v\u03b1\u2062(\u03c1t)subscript\ud835\udc63\ud835\udefcsubscript\ud835\udf0c\ud835\udc61v_{\\alpha}({\\rho_{t}})italic_v start_POSTSUBSCRIPT italic_\u03b1 end_POSTSUBSCRIPT ( italic_\u03c1 start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ) and values of \u03c3\ud835\udf0e\\sigmaitalic_\u03c3.",
|
| 104 |
+
"url": "http://arxiv.org/html/2111.08136v2/x5.png"
|
| 105 |
+
},
|
| 106 |
+
"3(a)": {
|
| 107 |
+
"figure_path": "2111.08136v2_figure_3(a).png",
|
| 108 |
+
"caption": "(a) Shallow NN with one dense layer\nFigure 3: Architectures of the NNs used in the experiments of Section 4.",
|
| 109 |
+
"url": "http://arxiv.org/html/2111.08136v2/x6.png"
|
| 110 |
+
},
|
| 111 |
+
"3(b)": {
|
| 112 |
+
"figure_path": "2111.08136v2_figure_3(b).png",
|
| 113 |
+
"caption": "(b) Convolutional NN (LeNet-1) with two convolutional and two pooling layers, and one dense layer\nFigure 3: Architectures of the NNs used in the experiments of Section 4.",
|
| 114 |
+
"url": "http://arxiv.org/html/2111.08136v2/x7.png"
|
| 115 |
+
},
|
| 116 |
+
"4": {
|
| 117 |
+
"figure_path": "2111.08136v2_figure_4.png",
|
| 118 |
+
"caption": "Figure 4: Comparison of the performances of a shallow (dashed lines) and convolutional (solid lines) NN with architectures as described in Figures 2(a) and 2(b), when trained with a discretized version of the anisotropic CBO dynamics (1). Depicted are the accuracies on a test dataset (orange lines) and the values of the objective function \u2130\u2130{\\cal E}caligraphic_E (blue lines), which was chosen to be the categorical crossentropy loss on a random sample of the training set of size 10000100001000010000.",
|
| 119 |
+
"url": "http://arxiv.org/html/2111.08136v2/x8.png"
|
| 120 |
+
}
|
| 121 |
+
},
|
| 122 |
+
"validation": true,
|
| 123 |
+
"references": [],
|
| 124 |
+
"url": "http://arxiv.org/html/2111.08136v2"
|
| 125 |
+
}
|
20240323/2202.04476v3.json
ADDED
|
@@ -0,0 +1,424 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Counting Kernels in Directed Graphs with Arbitrary Orientations",
|
| 3 |
+
"abstract": "A kernel of a directed graph is a subset of vertices that is both independent and absorbing (every vertex not in the kernel has an out-neighbour in the kernel).",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "A digraph is a simple, loopless, finite graph plus a choice of orientation for each edge, which may be in either direction or in both. A kernel111No relation to the homonymous concept in parametrised complexity. of a digraph is a subset of vertices that is independent and absorbing, that is, is a kernel iff the vertices with at least one outgoing edge to are exactly those not in ."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Counting kernels in fuzzy circular interval graphs",
|
| 15 |
+
"text": "In (undirected) claw-free graphs, the independent sets have remarkable properties allowing efficient algorithms for maximum independent set and related problems [FOS14 ###reference_bx17###, Her+19 ###reference_bx25###]. This makes them a popular graph class for algorithmic applications.\nIn a series of seven papers and a survey, [CS05 ###reference_bx11###] gave a structure theorem for claw-free graphs. Disregarding subclasses with bounded independence number (amenable to brute force as far as kernels are concerned), the quasi-line graphs are an important stepping stone in this decomposition. They are the graphs in which the neighbourhood of every vertex is the union of two cliques. In turn, connected quasi-line graphs are either fuzzy circular interval graphs (see below), or generalised line graphs \\cites[\\nopp1.1]Chudnovsky\u2013Seymour-quasi-line[\\nopp2.2]Chudnovsky\u2013Seymour-survey. Since the existence of kernels is -complete on arbitrarily-oriented line graphs [Azi+22 ###reference_bx3###], we focus on the first class. See Fig. 1 ###reference_### for a diagram depicting the inclusions between all these classes.\n\n###figure_1###"
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Fuzzy circular interval graphs and subclasses",
|
| 21 |
+
"text": "For every two points and of the unit circle , define as the closed interval of joining to anticlockwise, with . Also let , and so on.\nFollowing [CS05 ###reference_bx11###], a graph is a fuzzy circular interval graph (FCIG) if it has a model consisting of a function and a set of intervals of of the form with such that:\nno two intervals in share an endpoint;\nthere are no proper inclusions among members of ;\nfor every two adjacent vertices and there is some interval such that ;\nfor every two non-adjacent vertices and and interval , if then the endpoints of are and .\nThe model does not fully specify the adjacency relation between vertices it sends to distinct endpoints of the same interval (hence \u2018fuzzy\u2019). Two FCIGs may admit the same model. For example, consider the vertex set , and a model having and with distinct and . This model describes several graphs: a cycle, a path, a clique, and the union of two edges.\nThe graph is a fuzzy linear interval graph (FLIG), resp. circular interval graph (CIG), if it has an FCIG model with , resp. injective, and a linear interval graph (LIG) when it has a model where both hold. In other contexts, CIGs are also known as proper circular arc graphs and LIGs as indifference graphs, unit interval graphs or proper interval graphs. See Fig. 1 ###reference_### for some proper inclusions between these and other graph classes. Note that we could equally well define FLIGs by the same conditions as above, except that the codomain of is the real interval .\nThose four graph classes are each closed under taking induced subgraphs: if , the pair is a model of the subgraph induced by .\nIt is already known that kernel search takes polynomial time on LIGs in which every clique has a sink because LIGs are (strongly) chordal and on CIGs without bidirectional edges because CIGs are circular-arc [PIM20 ###reference_bx32###].\nLet be an FCIG (resp. FLIG). An FCIG (FLIG) model for (resp. is nice when (i) and (ii), for any two distinct vertices and , the equality implies that and are adjacent.\nA nice model can always be computed in time . Furthermore, every model in which already satisfies (ii) can be made nice by deleting some intervals while keeping the same [OPS12 ###reference_bx30###].\nptptptpt\n###figure_2### In the remainder of this section, we prove Theorem 2 ###reference_orem2### in three steps: we obtain a pared-down version for FLIGs via dynamic programming, we extend it to FCIGs, then we discuss how some additional kernel features may be tracked."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "The structure of kernels in fuzzy linear interval graphs",
|
| 27 |
+
"text": "We start with some results about the structure of FLIGs and their kernels. In all that follows, fix an FLIG and a nice model . We will build a weak ordering of whose restriction to each independent set (in particular, to each kernel) is a total order. Intuitively, we will then construct kernels of by \u2018guessing\u2019 their vertices one after the other \u2018from right to left\u2019.\nThe model gives a weak ordering (a \u2018total order with ties\u2019): for every two vertices and put iff . Since the model is nice, if and are non-adjacent then , so either or . Thus every independent set (in particular every kernel) is totally ordered by , even though is in general not total on .\nFurther write iff (equivalently, iff ). Now we may talk about the vertex intervals\nand so forth. In particular is a clique containing not only but all vertices mapped to .\nFor every vertex , the uniqueness of interval endpoints means that there is at most one interval . We set if it exists and otherwise. When nonempty, is a clique that may contain both neighbours and non-neighbours of . Since two intervals in may not share an endpoint,\nFor every two vertices and let the set comprise those kernels of the induced subgraph that contain and :\nIn this section we focus on the sets , for which we give a recursive description. We motivate this choice by observing that, while in general , we do have\nsince a kernel with leftmost and rightmost vertices and belongs to .\nLet us establish three properties of . These will also hold for induced subgraphs of when equipped with the same weak order.\nIf are not adjacent, and then (resp. and ).\nLet : there exists an interval with . But then is not in the interior of , as this would cause and to be adjacent. Hence . The other inclusion is similar.\n\u220e\nIf and with , then absorbs .\nSince is a kernel, every vertex must have an out-neighbour . If or then Lemma 1 ###reference_ma1### yields , which contradicts . Hence .\n\u220e\nIntuitively, Lemma 2 ###reference_ma2### describes the local structure of kernels in : consecutive kernel vertices are non-adjacent and absorb the open vertex interval which they delimit.\nIf and are distinct vertices, and , then absorbs , resp. and .\nFrom Lemma 2 ###reference_ma2### applied to (with and ), absorbs . The sole remaining case is that of a vertex . Since is a kernel, has an out-neighbour . If we are done. Otherwise, it must be that , but then Lemma 1 ###reference_ma1### (with , and ) gives , which contradicts .\nThe second result is symmetrical.\n\u220e\nNow we may describe the sets of the form with . Such sets are obtained by \u2018guessing\u2019 their members in decreasing -order, ensuring at each step that the newly selected vertex satisfies some compatibility conditions. The main source of complications is the possibility of choosing the next vertex from the set after selecting .\nTo address this issue, we sort the kernels in into disjoint subsets depending on their intersection with :\nNext, we introduce the sets from which kernel vertices will be selected at each step. For every two vertices , let\nThe functions and take values, which we may compute in polynomial time.\nRecall that indicates disjoint unions.\nIf are non-adjacent vertices and does not absorb then\nFinally if and does absorb then .\nEquation 6 ###reference_### expresses the fact that a kernel in has at least three vertices, since by hypothesis is not itself a kernel. 
The disjunction is on whether the second-rightmost kernel vertex belongs to the clique , and in the affirmative on the unique kernel vertex in .\nLet us prove Eq. 7 ###reference_###. It is a disjunction with respect to the second-rightmost vertex in the kernel.\nFor the direct inclusion,\nlet and let be the second-rightmost vertex of .\nObserve that since is an independent set, that (by hypothesis), and that absorbs by Lemma 3 ###reference_ma3###. Hence .\nIt remains to argue that . Its independence and the inclusions are clear. By Lemma 3 ###reference_ma3###, absorbs . We have only to show that every has an out-neighbour in .\nWe do know that has an out-neighbour in , which suffices unless this out-neighbour is . But then some interval would contain both and . Because and are not adjacent, the endpoints of would be and , which contradicts .\nFor the reverse inclusion, fix and and let us show that .\nThe set is independent: by Lemma 1 ###reference_ma1### the only possible exception would be an edge joining and , but rules out such an edge. It absorbs by definition of and by definition of , hence it absorbs . That it contains and is clear.\nEquation 8 ###reference_### is a disjunction with respect to the third rightmost vertex in the kernel. In the edge case where this vertex is , the whole kernel is . Consider the other case.\nFor the direct inclusion, let and let be the third rightmost vertex of . Then since is independent. By Lemma 3 ###reference_ma3###, absorbs . Hence .\nIt remains to show that . Its independence is clear. It does absorb by Lemma 3 ###reference_ma3###. The remaining case is that of . This must have an out-neighbour in by Lemma 2 ###reference_ma2###. If this out-neighbour were , there would be some containing and . Then the independence of would force , in turn leading to the contradiction .\nFor the reverse inclusion, let and . From the definition of it is easy to see that the set is independent and absorbs .\nFinally, the case when and does absorb is clear.\n\u220e\nTogether, Eqs. 6 ###reference_###, 7 ###reference_### and 8 ###reference_### let us express as a function of at most sets of the form . We view these results as defining a subproblem digraph whose nodes are pairs with , plus a terminal node for each . If and are non-adjacent and does not absorb , the node of has the following outgoing edges:\nfor each , an edge labelled with the symbol to the node ,\nfor each and , an edge labelled with the concatenation to the node , unless absorbs , in which case the sole outgoing edge is labelled and goes to .\nWhereas if does absorb the node has an edge labelled to .\nOur description of the sets is summarised by the following lemma.\nGiven an FLIG and the weak order on its vertex set coming from a nice FLIG model, we build a digraph on the vertex set\nin time polynomial in . This digraph is acyclic; it has nodes and maximum out-degree . Each edge is labelled with one or two vertices of .\nFor every pair of vertices with , the set is in bijection with the set of directed paths from the node to the node . Given one such path, the vertices of the corresponding kernel are read off in decreasing -order from the edge labels, omitting .\nThis translates properties of the set such as its cardinality or its smallest elements into well-trodden algorithmic problems on acyclic digraphs (the number of directed paths joining two nodes, the shortest directed paths with edge lengths in )."
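Lemma 4 reduces counting the kernels with prescribed extreme vertices (and finding smallest or largest ones) to standard path problems on the acyclic subproblem digraph. As the paper leaves that subroutine implicit, here is a hedged, stand-alone sketch of the path-counting step; the graph representation is an assumption of this sketch.

```python
from functools import lru_cache

def count_st_paths(succ, source, target):
    """Number of directed paths from source to target in an acyclic digraph.
    succ maps each node to an iterable of its out-neighbours; acyclicity is
    assumed (as guaranteed for the subproblem digraph), otherwise the
    recursion would not terminate."""
    @lru_cache(maxsize=None)
    def walks(node):
        if node == target:
            return 1
        return sum(walks(nxt) for nxt in succ.get(node, ()))
    return walks(source)

# toy DAG with two internal nodes and a terminal 'T'
succ = {"a": ("b", "c"), "b": ("T",), "c": ("b", "T")}
print(count_st_paths(succ, "a", "T"))   # 3 paths: a-b-T, a-c-T, a-c-b-T
```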
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "Reducing the circular case to the linear case",
|
| 33 |
+
"text": "Now we show how the set of all kernels in an FCIG is expressed in term of the sets in some FLIGs that are all induced subgraphs of . Before we consider FCIGs however, let us mention a simple fact about kernels in general digraphs (proof omitted).\nIf is a digraph and then\nNow let be an FCIG and let be an arbitrary vertex of . In polynomial time, we compute and fix a nice FCIG model of [OPS12 ###reference_bx30###]. We may use the same interval notation as for FLIGs, with the caveat that for every two vertices and the vertex intervals and are generally distinct and intersect in .\nFor each kernel , either , or there exists a unique pair such that and . That is, setting ,\nSingle-vertex kernels can be enumerated by brute force in time . Hence we are left to deal with the set for every two distinct vertices and . It is not hard to see that:\nIf and are not adjacent and absorbs , then is the set of kernels of containing and . If either condition fails, .\nThe first part is easy: there can be no other kernel vertex in as . The second part involves an FCIG analogue of Lemma 2 ###reference_ma2###, proven similarly.\n\u220e\nGiven and non-adjacent and such that absorbs , consider the subgraph as well as the FCIG interval model it inherits from .\nBy Lemma 6 ###reference_ma6###, we can delete vertices in without modifying the set of kernels containing and , so we delete . We now have . If the interval is in , we delete it and retain a valid model, since there is no adjacency it could contribute. Now there are no intervals in containing both and . Without removing any edge, we may shrink the intervals in until some sub-interval of is not covered by .\nThis shows that the resulting induced subgraph is an FLIG with the same set of kernels containing both and as , and has an FLIG model where and are the leftmost and rightmost vertices. This model might not be nice (if we deleted more vertices than intervals), but can be made nice again by deleting unnecessary intervals [OPS12 ###reference_bx30###]. See Figure 3 ###reference_### for an illustration.\nThus, computing the cardinality (or an element, or the number of elements of a certain size) of amounts to computing in the FLIG as in Section 2.2 ###reference_###.\nCombining the present results with those in Section 2.2 ###reference_###, we obtain Theorem 1 ###reference_orem1###.\nThe proof of Theorem 1 ###reference_orem1### relies heavily on the structure of the graphs at hand. Namely, the weak vertex ordering arising from a LIG or FLIG model allows for a recursive approach. For example, LIGs can be understood as the intersection graphs of unit intervals\u2013the whole graph structure thus determined by the position of left endpoints. By contrast, there is no such convenient vertex ordering in general interval graphs."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.4",
|
| 37 |
+
"parent_section_id": "2",
|
| 38 |
+
"section_name": "Extensions of the algorithm",
|
| 39 |
+
"text": "Theorem 1 ###reference_orem1### can be extended to Theorem 2 ###reference_orem2### below.\nGiven a fuzzy circular interval graph with arbitrary edge orientations, two subsets , integral vertex weights , and , consider the set\nIn time polynomial in , we can compute its number of elements and, if nonempty, return one of them.\nIn order to do so we modify the graph from Section 2.2 ###reference_### by refining the definitions of and as needed, to ensure that kernel vertices respect the newly introduced constraints.\nIf there are two subsets and we want to encode only kernels that include and are included in , then we may replace Eq. 4 ###reference_### with\nand Eq. 5 ###reference_### similarly. This amounts to deleting some of the edges in : the edges with a label not in and the out-edges from a node with at least one label such that .\nIf is a weight function, then we define the weight of each edge in as the sum of the weights of its labels. For each node and terminal of , we can list the weights of all directed paths from to with multiplicity in a bottom-up fashion, since is acyclic. This still takes time polynomial in , independent of ."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "2.5",
|
| 43 |
+
"parent_section_id": "2",
|
| 44 |
+
"section_name": "Running times",
|
| 45 |
+
"text": "Let us argue that the running times of our algorithms for counting kernels are\non FCIGs,\non FLIGs and CIGs,\non LIGs (proper interval graphs).\nThe subproblem digraph from Section 2.2 ###reference_### can be partitioned into parts, each induced by the vertex set for some . Equivalently, each part correspond to kernels with a same leftmost vertex. There are no edges between parts (and parts need not be connected). Each part has vertices. Hence, for every fixed vertex , computing all values of takes time once the part containing has been constructed. Said construction is usually the bottleneck.\nFor a fixed , let\nBased on this observation, here is a possible algorithm that computes for a fixed and all values , with the vertex and the subset as inputs. Note that, since is not quite a total order but a weak order, it partitions into classes () and (through quotienting) induces a total order on the set of these classes.\nIt runs in time . Similarly, if for every and we set\nand a similar algorithm computes the values of for every and every in time . Hence, for any fixed it takes time to build only the part of containing , or to build all parts.\nAs for FCIGs, each set from Section 2.3 ###reference_### reduces to a set in an FLIG that potentially depends on both and , but careful consideration of the deleted vertices lets us argue that said FLIG only depends on . Thus we have to build parts, for a total running time of .\nIn circular interval graphs, we can ensure the additional model property that for every vertex . This leads to an empty domain for , and therefore to a running time of .\nWe can simplify the analysis by tracking only the rightmost kernel vertex. For example, add an isolated dummy vertex at the left end of the -ordering. Because is isolated, it belongs to all kernels. Thus we only need to compute the sets where , that is, we only have to build the part of that contains . This takes time , which is essentially optimal in the sense that it is also the running time of the FLIG recognition algorithm.\nIn linear interval graphs, we combine the insights for FLIGs and CIGs: we build a single part of the subproblem digraph and do not compute . This leads to a running time . We leave the matter of getting to e.g. open."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "Further remarks",
|
| 51 |
+
"text": ""
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.1",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "Counting maximal and maximum independent sets",
|
| 57 |
+
"text": "An independent set of a graph is maximal when it has no independent superset, and maximum when its cardinality is largest among independent sets. Clearly, the latter entails the former.\nThe kernels of a digraph with all edges bidirectional are exactly its maximal independent sets. Hence Theorem 2 ###reference_orem2### translates into polynomial-time algorithms for counting maximal independent sets, maximal independent sets of a certain size, and thus also maximum independent sets in FCIGs; see Corollaries 2 ###reference_ollary2### and 3 ###reference_ollary3###. Compare with chordal graphs and other quasi-line graphs:\nCounting maximal independent sets is -complete on chordal graphs.\nCounting maximum independent sets is -complete on line graphs of bipartite graphs.\nLet be a bipartite graph. In polynomial time, we decide whether has perfect matchings by building a largest matching (e.g. by the Ford\u2013Fulkerson algorithm) and checking whether it is perfect. In the affirmative, the perfect matchings of are the maximum independent sets of its line graph. Now an algorithm that counts the latter also counts the former, but counting the perfect matchings of bipartite graphs is -complete [Val79 ###reference_bx37###].\n\u220e"
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.2",
|
| 61 |
+
"parent_section_id": "3",
|
| 62 |
+
"section_name": "Tree-width, clique-width",
|
| 63 |
+
"text": "The tree-width of a digraph is an integer measuring its complexity on a scale on which trees are simplest [CE12 ###reference_bx9###]. Note that does not depend on edge orientations.\nA set of vertices being a kernel is expressible in the monadic second-order logic of directed graphs (see e.g. [ALS91 ###reference_bx1###]), for example by the formula:\nwhere the relation symbol encodes directed edges, so the counting version of Courcelle\u2019s theorem [CE12 ###reference_bx9###, Th. 6.56] gives\nFor some computable function there exist algorithms that, given a digraph , compute , and, if , build a largest and a smallest kernel, all of it in time .\nThe proofs, constructive but quite involved, yield fast growing functions , making said algorithms impractical. Still, this approach has interesting consequences, such as a polynomial-time algorithm for deciding the existence of kernels of size in planar digraphs [Gut+05 ###reference_bx24###].\nAs for our Theorem 2 ###reference_orem2###, there is little overlap with these results since even LIGs have arbitrarily large cliques, hence unbounded tree-width.\nClique-width, the other most common complexity measure, is as coarse as tree-width in our orientation-agnostic setting. Indeed, unlike tree-width, clique-width strongly depends on edge orientations: if a class of undirected graphs has unbounded tree-width (say, if contains all complete graphs) then the digraph class obtained by taking all possible edge orientations on members of has unbounded clique-width [CE12 ###reference_bx9###, Prop. 2.117]. In fact, we show that the existence of kernels is -complete on cographs (which, when undirected, are the graphs of clique-width at most ): see Theorem 4 ###reference_orem4### below. This contrasts with our Theorem 2 ###reference_orem2###, and with the positive results on adequately oriented cographs [AS05 ###reference_bx2###].\nLikewise, bi-min-width is another complexity measure with applications to kernel-finding and related problems [JKT22 ###reference_bx26###] which, being unbounded already on orientations of complete graphs, cannot serve our purposes."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.3",
|
| 67 |
+
"parent_section_id": "3",
|
| 68 |
+
"section_name": "Cographs and threshold graphs",
|
| 69 |
+
"text": "One may define cographs inductively as being either a single-vertex graph, or the union of and , or the join444One obtains the join of two disjoint undirected graphs by adding every possible edge between them. of and , where and are smaller disjoint cographs. In particular, cliques, independent sets, and matchings are easily identified as cographs.\nThe existence of kernels is -complete on cographs with bidirectional edges.\nThe problem is clearly in .\nFix an instance of the Boolean satisfiability problem (SAT) in conjunctive normal form: a set of variables which defines a set of literals , and a set of disjunctive clauses . A solution of the SAT instance is a subset such that for each variable and intersects every clause in .\nBuild a digraph on by putting a bidirectional edge between every pair of clauses in , and between every variable and its negation . Also add two special vertices and with an edge from to . Finally, for each clause put a directed edge from to the literals that appear in and to , as well as directed edges to from the literals that do not appear in and from . The resulting digraph on , which we will call , has the following properties:\nhas vertices;\n(the undirected graph underlying) is a cograph, as the join of a complete graph on and a perfect matching on ;\nIf is a kernel of then and . Indeed, since , either or , but in the first case contradicts the independence of , since must have an out-neighbour in . Hence , which implies ;\nIn view of the previous fact, kernels are subsets of literals (plus ) containing exactly one of and for each variable , and such that each has at least one out-neighbour in the kernel. Thus the kernels of are in bijection with the solutions of the SAT instance.\nHence this construction is a polynomial-time parsimonious reduction of SAT to the existence of kernels in cographs.\n\u220e\nWe conclude by noting that the counting problem is not only polynomial but linear on a particular subclass of cographs, even though said subclass, having unbounded tree-width, does not fall under the purview of Theorem 3 ###reference_orem3###.\nA cograph is a threshold graph if it has a cograph construction sequence as above where is a single vertex at each step.\nCounting the kernels of an arbitrarily oriented threshold graph and, when applicable, building one, takes linear time . If is the clique number, enumerating its kernels takes time .\nThese results are already known for the maximal independent sets in undirected threshold graphs [GR20 ###reference_bx22###].\nFor every set , let . A threshold graph construction sequence for is found in linear time [MP95 ###reference_bx29###, \u00a71.4.2], so we give a recursive description of .\nIf is a single vertex , then\nIf is the disjoint union of a vertex and a subgraph , then\nIf is the join of a vertex and a subgraph , then\n iff absorbs and (and we can check these conditions in time ) and regardless\nThis lets us compute in time where (for some absolute constant ) the three cases above give\nFor enumerating the same reasoning applies, except that each \u2018disjoint union\u2019 step spends constant time appending a vertex to each set in . As a threshold graph, has maximal independent sets [GR20 ###reference_bx22###], so and we obtain\nThe maximal independent sets of a graph are enumerable with polynomial delay, hence, if there are such sets, enumerable in total time . This is unlikely to extend to kernels. 
Even if only on cographs, a polynomial-delay algorithm for kernels would solve SAT and (because of the parsimonious reduction) an enumeration algorithm polynomial in the total size of its output would solve unambiguous-SAT. This would respectively establish and, by the Valiant\u2013Vazirani theorem, [VV86 ###reference_bx39###]."
|
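The recursion sketched above for arbitrarily oriented threshold graphs has a simpler undirected counterpart: the maximal independent sets of an undirected threshold graph can be read off its construction sequence, in the spirit of [GR20]. Below is a hedged sketch of that undirected special case only; the 'join'/'union' flag encoding of the construction sequence is our own assumption, not the paper's notation.

```python
def maximal_independent_sets(flags):
    """Maximal independent sets of the threshold graph built by `flags`.

    Vertex 0 is the initial vertex; vertex i (i >= 1) is added with flags[i-1]:
    'join'  -> made adjacent to all earlier vertices,
    'union' -> added with no edges to earlier vertices.
    """
    sets = [{0}]
    for i, flag in enumerate(flags, start=1):
        if flag == 'join':
            # The new dominating vertex is a maximal independent set on its own;
            # every previous maximal independent set stays maximal.
            sets = sets + [{i}]
        else:
            # The new vertex is isolated at this point of the construction,
            # so it joins every maximal independent set.
            sets = [m | {i} for m in sets]
    return sets

example = maximal_independent_sets(['join', 'union', 'join'])
print(len(example), example)   # 3 [{0, 2}, {1, 2}, {3}]
```

The count is 1 plus the number of 'join' steps, which is consistent with a threshold graph having at most linearly many maximal independent sets.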
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "3.4",
|
| 73 |
+
"parent_section_id": "3",
|
| 74 |
+
"section_name": "Other graph classes",
|
| 75 |
+
"text": "A graph is concave-round if its vertices have a circular ordering in which every closed neighbourhood (that is, ) is an interval [BHY00 ###reference_bx6###]. Concave-round graphs are quasi-line. They form a subclass of normal circular-arc graphs [Saf20 ###reference_bx34###] and a proper superclass of CIGs. Unknowingly, we have already solved the problem for this class, because of the following easy but unpublished inclusion.\nConcave-round graphs form a proper subclass of the fuzzy circular interval graphs. More precisely, they are either fuzzy linear interval graphs or circular interval graphs (proper circular arc graphs).\nA concave-round graph is (at least) one of circular interval and co-bipartite [Tuc71 ###reference_bx36###, Saf20 ###reference_bx34###]. In turn, co-bipartite graphs (i.e. graphs with a partition into two cliques) are clearly FLIGs: they admit a single-interval model, with each vertex mapped to one endpoint.\n\u220e\nThere are some results on kernel problems in interval digraphs [FHJ21 ###reference_bx16###]. However, said interval digraphs are quite different from arbitrarily-directed interval graphs: interval graphs are chordal, whereas every directed cycle is an interval digraph in the sense of [FHJ21 ###reference_bx16###]. Hence there is little overlap with our results."
|
| 76 |
+
}
|
| 77 |
+
],
|
| 78 |
+
"appendix": [],
|
| 79 |
+
"tables": {},
|
| 80 |
+
"image_paths": {
|
| 81 |
+
"1": {
|
| 82 |
+
"figure_path": "2202.04476v3_figure_1.png",
|
| 83 |
+
"caption": "Figure 1: Proper inclusions among some graph classes. Inclusions without a reference are well-known or obvious.\nFigure 2 shows a fuzzy linear interval graph which is not circular arc, and the claw K1,3subscript\ud835\udc3e13K_{1,3}italic_K start_POSTSUBSCRIPT 1 , 3 end_POSTSUBSCRIPT itself is an interval graph that is not claw-free. The Information System on Graph Classes and their Inclusions is a helpful resource [Rid].",
|
| 84 |
+
"url": "http://arxiv.org/html/2202.04476v3/x1.png"
|
| 85 |
+
},
|
| 86 |
+
"2": {
|
| 87 |
+
"figure_path": "2202.04476v3_figure_2.png",
|
| 88 |
+
"caption": "Figure 2: A graph and a nice FLIG model. The reader can check that this graph is not an LIG, nor more generally a circular arc graph. Hint: the disjoint union of a 4-cycle and a vertex is not a circular arc graph.",
|
| 89 |
+
"url": "http://arxiv.org/html/2202.04476v3/x2.png"
|
| 90 |
+
}
|
| 91 |
+
},
|
| 92 |
+
"validation": true,
|
| 93 |
+
"references": [
|
| 94 |
+
{
|
| 95 |
+
"1": {
|
| 96 |
+
"title": "\u201cEasy Problems for Tree-Decomposable Graphs\u201d",
|
| 97 |
+
"author": "Stefan Arnborg, Jens Lagergren and Detlef Seese",
|
| 98 |
+
"venue": "In Journal of Algorithms 12.2, 1991, pp. 308\u2013340",
|
| 99 |
+
"url": null
|
| 100 |
+
}
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"2": {
|
| 104 |
+
"title": "\u201cPolynomial algorithms for kernels in comparability, permutation and -free graphs\u201d",
|
| 105 |
+
"author": "Moncef Abbas and Youcef Saoula",
|
| 106 |
+
"venue": "In 4OR, 2005, pp. 217\u2013225",
|
| 107 |
+
"url": null
|
| 108 |
+
}
|
| 109 |
+
},
|
| 110 |
+
{
|
| 111 |
+
"3": {
|
| 112 |
+
"title": "\u201cStable Matching with Uncertain Pairwise Preferences\u201d",
|
| 113 |
+
"author": "Haris Aziz et al.",
|
| 114 |
+
"venue": "In Theoretical Computer Science, 2022",
|
| 115 |
+
"url": null
|
| 116 |
+
}
|
| 117 |
+
},
|
| 118 |
+
{
|
| 119 |
+
"4": {
|
| 120 |
+
"title": "\u201cPerfect graphs, kernels, and cores of cooperative games\u201d",
|
| 121 |
+
"author": "E. Boros and V. Gurvich",
|
| 122 |
+
"venue": "In Discrete Mathematics 306.19, 2006, pp. 2336\u20132354",
|
| 123 |
+
"url": null
|
| 124 |
+
}
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"5": {
|
| 128 |
+
"title": "\u201cDigraphs\u201d, Springer Monographs in Mathematics",
|
| 129 |
+
"author": "J\u00f8rgen Bang-Jensen and Gregory Z. Gutin",
|
| 130 |
+
"venue": "London: Springer London, 2009",
|
| 131 |
+
"url": null
|
| 132 |
+
}
|
| 133 |
+
},
|
| 134 |
+
{
|
| 135 |
+
"6": {
|
| 136 |
+
"title": "\u201cConvex-Round and Concave-Round Graphs\u201d",
|
| 137 |
+
"author": "J\u00f8rgen Bang-Jensen, Jing Huang and Anders Yeo",
|
| 138 |
+
"venue": "In SIAM Journal on Discrete Mathematics 13.2, 2000, pp. 179\u2013193",
|
| 139 |
+
"url": null
|
| 140 |
+
}
|
| 141 |
+
},
|
| 142 |
+
{
|
| 143 |
+
"7": {
|
| 144 |
+
"title": "\u201cNim, A Game with a Complete Mathematical Theory\u201d",
|
| 145 |
+
"author": "Charles L. Bouton",
|
| 146 |
+
"venue": "In Annals of Mathematics 3.1/4",
|
| 147 |
+
"url": null
|
| 148 |
+
}
|
| 149 |
+
},
|
| 150 |
+
{
|
| 151 |
+
"8": {
|
| 152 |
+
"title": "\u201cA combinatorial problem in logic\u201d",
|
| 153 |
+
"author": "C. Berge and A.Ramachandra Rao",
|
| 154 |
+
"venue": "In Discrete Mathematics 17.1, 1977, pp. 23\u201326",
|
| 155 |
+
"url": null
|
| 156 |
+
}
|
| 157 |
+
},
|
| 158 |
+
{
|
| 159 |
+
"9": {
|
| 160 |
+
"title": "\u201cGraph Structure and Monadic Second-Order Logic: A Language-Theoretic Approach\u201d, Encyclopedia of Mathematics and Its Applications",
|
| 161 |
+
"author": "Bruno Courcelle and Joost Engelfriet",
|
| 162 |
+
"venue": "Cambridge University Press, 2012",
|
| 163 |
+
"url": null
|
| 164 |
+
}
|
| 165 |
+
},
|
| 166 |
+
{
|
| 167 |
+
"10": {
|
| 168 |
+
"title": "\u201cOn the computational complexity of finding a kernel\u201d Out of print, but see author\u2019s website: https://users.encs.concordia.ca/~chvatal/kernel.html, 1973",
|
| 169 |
+
"author": "V\u00e1clav Chv\u00e1tal",
|
| 170 |
+
"venue": null,
|
| 171 |
+
"url": null
|
| 172 |
+
}
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"11": {
|
| 176 |
+
"title": "\u201cThe structure of claw-free graphs\u201d",
|
| 177 |
+
"author": "Maria Chudnovsky and Paul Seymour",
|
| 178 |
+
"venue": "In Surveys in Combinatorics 2005 327, London Mathematical Society Lecture Note Series",
|
| 179 |
+
"url": null
|
| 180 |
+
}
|
| 181 |
+
},
|
| 182 |
+
{
|
| 183 |
+
"12": {
|
| 184 |
+
"title": "\u201cClaw-free graphs. VII. Quasi-line graphs\u201d",
|
| 185 |
+
"author": "Maria Chudnovsky and Paul Seymour",
|
| 186 |
+
"venue": "In Journal of Combinatorial Theory, Series B 102.6, 2012, pp. 1267\u20131294",
|
| 187 |
+
"url": null
|
| 188 |
+
}
|
| 189 |
+
},
|
| 190 |
+
{
|
| 191 |
+
"13": {
|
| 192 |
+
"title": "\u201cKernels in directed graphs: a poison game\u201d",
|
| 193 |
+
"author": "P. Duchet and H. Meyniel",
|
| 194 |
+
"venue": "In Discrete Mathematics 115.1, 1993, pp. 273\u2013276",
|
| 195 |
+
"url": null
|
| 196 |
+
}
|
| 197 |
+
},
|
| 198 |
+
{
|
| 199 |
+
"14": {
|
| 200 |
+
"title": "\u201cOn Kernels, Defaults and Even Graphs\u201d",
|
| 201 |
+
"author": "Yannis Dimopoulos, Vangelis Magirou and Christos H. Papadimitriou",
|
| 202 |
+
"venue": "In Annals of Mathematics and Artificial Intelligence 20.1, 1997, pp. 1\u201312",
|
| 203 |
+
"url": null
|
| 204 |
+
}
|
| 205 |
+
},
|
| 206 |
+
{
|
| 207 |
+
"15": {
|
| 208 |
+
"title": "\u201cArgumentation, paradox and kernels in directed graphs\u201d, 2012",
|
| 209 |
+
"author": "Sjur Dyrkolbotn",
|
| 210 |
+
"venue": "URL: http://www.ii.uib.no/~michal/Sjur-tese.pdf",
|
| 211 |
+
"url": null
|
| 212 |
+
}
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"16": {
|
| 216 |
+
"title": "\u201cOn the Kernel and Related Problems in Interval Digraphs\u201d",
|
| 217 |
+
"author": "Mathew C. Francis, Pavol Hell and Dalu Jacob",
|
| 218 |
+
"venue": "In 32nd International Symposium on Algorithms and Computation (ISAAC 2021) 212, Leibniz International Proceedings in Informatics (LIPIcs)",
|
| 219 |
+
"url": null
|
| 220 |
+
}
|
| 221 |
+
},
|
| 222 |
+
{
|
| 223 |
+
"17": {
|
| 224 |
+
"title": "\u201cSolving the Weighted Stable Set Problem in Claw-Free Graphs via Decomposition\u201d",
|
| 225 |
+
"author": "Yuri Faenza, Gianpaolo Oriolo and Gautier Stauffer",
|
| 226 |
+
"venue": "In Journal of the ACM 61.4",
|
| 227 |
+
"url": null
|
| 228 |
+
}
|
| 229 |
+
},
|
| 230 |
+
{
|
| 231 |
+
"18": {
|
| 232 |
+
"title": "\u201cPlanar kernel and Grundy with , , are NP-complete\u201d",
|
| 233 |
+
"author": "Aviezri S. Fraenkel",
|
| 234 |
+
"venue": "In Discrete Applied Mathematics 3.4, 1981, pp. 257\u2013262",
|
| 235 |
+
"url": null
|
| 236 |
+
}
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"19": {
|
| 240 |
+
"title": "\u201cCombinatorial Game Theory Foundations Applied to Digraph Kernels\u201d",
|
| 241 |
+
"author": "Aviezri S. Fraenkel",
|
| 242 |
+
"venue": "In Electronic Journal of Combinatorics 4.2, 1997",
|
| 243 |
+
"url": null
|
| 244 |
+
}
|
| 245 |
+
},
|
| 246 |
+
{
|
| 247 |
+
"20": {
|
| 248 |
+
"title": "\u201cThe List Chromatic Index of a Bipartite Multigraph\u201d",
|
| 249 |
+
"author": "Fred Galvin",
|
| 250 |
+
"venue": "In Journal of Combinatorial Theory, Series B 63.1, 1995, pp. 153\u2013158",
|
| 251 |
+
"url": null
|
| 252 |
+
}
|
| 253 |
+
},
|
| 254 |
+
{
|
| 255 |
+
"21": {
|
| 256 |
+
"title": "\u201cThe Stable Marriage Problem. Structure and Algorithms\u201d, Foundations of Computing",
|
| 257 |
+
"author": "Dan Gusfield and Robert W. Irving",
|
| 258 |
+
"venue": "MIT Press, 1989",
|
| 259 |
+
"url": null
|
| 260 |
+
}
|
| 261 |
+
},
|
| 262 |
+
{
|
| 263 |
+
"22": {
|
| 264 |
+
"title": "\u201cCounting and Enumerating Independent Sets with Applications to Combinatorial Optimization Problems\u201d",
|
| 265 |
+
"author": "Frank Gurski and Carolin Rehs",
|
| 266 |
+
"venue": "In Mathematical Methods of Operations Research 91.3, 2020, pp. 439\u2013463",
|
| 267 |
+
"url": null
|
| 268 |
+
}
|
| 269 |
+
},
|
| 270 |
+
{
|
| 271 |
+
"23": {
|
| 272 |
+
"title": "\u201cCollege Admissions and the Stability of Marriage\u201d",
|
| 273 |
+
"author": "D. Gale and L.S. Shapley",
|
| 274 |
+
"venue": "In The American Mathematical Monthly 69.1",
|
| 275 |
+
"url": null
|
| 276 |
+
}
|
| 277 |
+
},
|
| 278 |
+
{
|
| 279 |
+
"24": {
|
| 280 |
+
"title": "\u201cKernels in Planar Digraphs\u201d",
|
| 281 |
+
"author": "Gregory Gutin, Ton Kloks, Chuan Min Lee and Anders Yeo",
|
| 282 |
+
"venue": "In Journal of Computer and System Sciences 71.2, 2005, pp. 174\u2013184",
|
| 283 |
+
"url": null
|
| 284 |
+
}
|
| 285 |
+
},
|
| 286 |
+
{
|
| 287 |
+
"25": {
|
| 288 |
+
"title": "\u201cDomination When the Stars Are Out\u201d",
|
| 289 |
+
"author": "Danny Hermelin, Matthias Mnich, Erik Jan Van Leeuwen and Gerhard Woeginger",
|
| 290 |
+
"venue": "In ACM Transactions on Algorithms 15.2",
|
| 291 |
+
"url": null
|
| 292 |
+
}
|
| 293 |
+
},
|
| 294 |
+
{
|
| 295 |
+
"26": {
|
| 296 |
+
"title": "\u201cClasses of Intersection Digraphs with Good Algorithmic Properties\u201d",
|
| 297 |
+
"author": "Lars Jaffke, O-joung Kwon and Jan Arne Telle",
|
| 298 |
+
"venue": "In 39th International Symposium on Theoretical Aspects of Computer Science (STACS 2022) 219, Leibniz International Proceedings in Informatics (LIPIcs)",
|
| 299 |
+
"url": null
|
| 300 |
+
}
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"27": {
|
| 304 |
+
"title": "\u201cKernels in perfect line-graphs\u201d",
|
| 305 |
+
"author": "Fr\u00e9d\u00e9ric Maffray",
|
| 306 |
+
"venue": "In Journal of Combinatorial Theory, Series B 55.1, 1992, pp. 1\u20138",
|
| 307 |
+
"url": null
|
| 308 |
+
}
|
| 309 |
+
},
|
| 310 |
+
{
|
| 311 |
+
"28": {
|
| 312 |
+
"title": "\u201cOn Cliques in Graphs\u201d",
|
| 313 |
+
"author": "J.W. Moon and L. Moser",
|
| 314 |
+
"venue": "In Israel Journal of Mathematics 3.1, 1965, pp. 23\u201328",
|
| 315 |
+
"url": null
|
| 316 |
+
}
|
| 317 |
+
},
|
| 318 |
+
{
|
| 319 |
+
"29": {
|
| 320 |
+
"title": "\u201cThreshold Graphs and Related Topics\u201d 56, Annals of Discrete Mathematics",
|
| 321 |
+
"author": "N.V.R. Mahadev and U.N. Peled",
|
| 322 |
+
"venue": "Amsterdam New York: Elsevier, 1995",
|
| 323 |
+
"url": null
|
| 324 |
+
}
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"30": {
|
| 328 |
+
"title": "\u201cOn the recognition of fuzzy circular interval graphs\u201d",
|
| 329 |
+
"author": "Gianpaolo Oriolo, Ugo Pietropaoli and Gautier Stauffer",
|
| 330 |
+
"venue": "In Discrete Mathematics 312.8, 2012, pp. 1426\u20131435",
|
| 331 |
+
"url": null
|
| 332 |
+
}
|
| 333 |
+
},
|
| 334 |
+
{
|
| 335 |
+
"31": {
|
| 336 |
+
"title": "\u201cCounting the Number of Independent Sets in Chordal Graphs\u201d",
|
| 337 |
+
"author": "Yoshio Okamoto, Takeaki Uno and Ryuhei Uehara",
|
| 338 |
+
"venue": "In Journal of Discrete Algorithms 6.2, 2008, pp. 229\u2013242",
|
| 339 |
+
"url": null
|
| 340 |
+
}
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"32": {
|
| 344 |
+
"title": "\u201cPerfect graphs with polynomially computable kernels\u201d",
|
| 345 |
+
"author": "Ad\u00e8le Pass-Lanneau, Ayumi Igarashi and Fr\u00e9d\u00e9ric Meunier",
|
| 346 |
+
"venue": "In Discrete Applied Mathematics 272, 2020, pp. 69\u201374",
|
| 347 |
+
"url": null
|
| 348 |
+
}
|
| 349 |
+
},
|
| 350 |
+
{
|
| 351 |
+
"33": {
|
| 352 |
+
"title": "\u201cInformation System on Graph Classes and Their Inclusions\u201d",
|
| 353 |
+
"author": "H.N. Ridder et al.",
|
| 354 |
+
"venue": "URL: https://www.graphclasses.org/",
|
| 355 |
+
"url": null
|
| 356 |
+
}
|
| 357 |
+
},
|
| 358 |
+
{
|
| 359 |
+
"34": {
|
| 360 |
+
"title": "\u201cCharacterization and Linear-Time Detection of Minimal Obstructions to Concave-Round Graphs and the Circular-Ones Property\u201d",
|
| 361 |
+
"author": "Mart\u00edn D. Safe",
|
| 362 |
+
"venue": "In Journal of Graph Theory 93.2, 2020, pp. 268\u2013298",
|
| 363 |
+
"url": null
|
| 364 |
+
}
|
| 365 |
+
},
|
| 366 |
+
{
|
| 367 |
+
"35": {
|
| 368 |
+
"title": "\u201cEnumerating the kernels of a directed graph with no odd circuits\u201d",
|
| 369 |
+
"author": "Jayme L. Szwarcfiter and Guy Chaty",
|
| 370 |
+
"venue": "In Information Processing Letters 51.3, 1994, pp. 149\u2013153",
|
| 371 |
+
"url": null
|
| 372 |
+
}
|
| 373 |
+
},
|
| 374 |
+
{
|
| 375 |
+
"36": {
|
| 376 |
+
"title": "\u201cMatrix Characterizations of Circular-Arc Graphs.\u201d",
|
| 377 |
+
"author": "Alan Tucker",
|
| 378 |
+
"venue": "In Pacific Journal of Mathematics 39.2",
|
| 379 |
+
"url": null
|
| 380 |
+
}
|
| 381 |
+
},
|
| 382 |
+
{
|
| 383 |
+
"37": {
|
| 384 |
+
"title": "\u201cThe Complexity of Computing the Permanent\u201d",
|
| 385 |
+
"author": "L.G. Valiant",
|
| 386 |
+
"venue": "In Theoretical Computer Science 8.2, 1979, pp. 189\u2013201",
|
| 387 |
+
"url": null
|
| 388 |
+
}
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"38": {
|
| 392 |
+
"title": "\u201cTheory of Games and Economic Behavior\u201d",
|
| 393 |
+
"author": "John Neumann and Oskar Morgenstern",
|
| 394 |
+
"venue": "Princeton University Press, 1944",
|
| 395 |
+
"url": null
|
| 396 |
+
}
|
| 397 |
+
},
|
| 398 |
+
{
|
| 399 |
+
"39": {
|
| 400 |
+
"title": "\u201cNP Is as Easy as Detecting Unique Solutions\u201d",
|
| 401 |
+
"author": "L.G. Valiant and V.V. Vazirani",
|
| 402 |
+
"venue": "In Theoretical Computer Science 47, 1986, pp. 85\u201393",
|
| 403 |
+
"url": null
|
| 404 |
+
}
|
| 405 |
+
},
|
| 406 |
+
{
|
| 407 |
+
"40": {
|
| 408 |
+
"title": "\u201cFinding kernels or solving SAT\u201d",
|
| 409 |
+
"author": "Micha\u0142 Walicki and Sjur Dyrkolbotn",
|
| 410 |
+
"venue": "In Journal of Discrete Algorithms 10, 2012, pp. 146\u2013164",
|
| 411 |
+
"url": null
|
| 412 |
+
}
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"41": {
|
| 416 |
+
"title": "\u201c\u00dcber eine Anwendung der Mengenlehre auf die Theorie des Schachspiels\u201d",
|
| 417 |
+
"author": "Ernst Zermelo",
|
| 418 |
+
"venue": "In Proceedings of the Fifth International Congress of Mathematicians 2, 1913, pp. 501\u2013504",
|
| 419 |
+
"url": null
|
| 420 |
+
}
|
| 421 |
+
}
|
| 422 |
+
],
|
| 423 |
+
"url": "http://arxiv.org/html/2202.04476v3"
|
| 424 |
+
}
|
20240323/2202.12074v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2205.11100v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2206.08898v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2207.04913v2.json
ADDED
|
@@ -0,0 +1,262 @@
|
|
|
| 1 |
+
{
|
| 2 |
+
"title": "Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge",
|
| 3 |
+
"abstract": "Domain generalization aims at learning a universal model that performs well on unseen target domains, incorporating knowledge from multiple source domains.\nIn this research, we consider the scenario where different domain shifts occur among conditional distributions of different classes across domains.\nWhen labeled samples in the source domains are limited, existing approaches are not sufficiently robust.\nTo address this problem,\nwe propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG),\ninspired by the concept of distributionally robust\noptimization. We encourage robustness over conditional distributions within class-specific Wasserstein uncertainty sets and optimize the worst-case performance of a classifier over\nthese uncertainty sets.\nWe further develop a test-time adaptation module leveraging optimal transport to quantify the relationship between the unseen target domain and source domains to make adaptive inference for target data.\nExperiments on the Rotated MNIST, PACS and the VLCS datasets demonstrate that our method could effectively balance the robustness and discriminability in challenging generalization scenarios.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "In many practical learning applications, labeled training data are only available from fragmented source domains. It is thus a challenge to learn a robust model for future data that could come from a new domain, with unknown domain shift. One commonly acknowledged solution to this challenge is domain generalization [1 ###reference_b1###], which aims at learning a model that generalizes well to target domains based on available training data from multiple source domains and in a total absence of prior knowledge about the target domain. A surge of popularity has been seen recently in the application of domain generalization in various fields, such as computer vision [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###], natural processing [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###], and reinforcement learning [13 ###reference_b13###], etc.\nNumerous methods have been developed for learning a generalizable model by exploiting the available data from the source domains,\nwhere the shifts across these source domains are implicitly assumed to be representative of the target shift that we will meet at test time.\nThe well-known approaches include learning domain-invariant feature representations through kernel functions [1 ###reference_b1###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###], or by distribution alignment [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###], or in an adversarial manner [23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###, 8 ###reference_b8###, 26 ###reference_b26###].\nThe learned invariance across source domains, however, may not be typical if the unseen target shift is of extreme magnitude.\nIn this case, forcing distributions to align in a common representation space may result in a biased model that overfits the source domains,\nand only performs well for target domains that are similar to certain source domains.\nInstead, to explicitly model unseen target domain shifts, meta-learning-based domain generalization methods like MLDG [13 ###reference_b13###]\ndivides the source domains into non-overlapping meta-train and meta-test domains, which fails to hedge against the possible target shift beyond the distribution shifts observed in source domains.\nAlso, these approaches require sufficient source training data to make good meta-optimization within each mini-batch.\nPossible domain shift could also been modeled by enhancing the diversity of data\nbased on some data augmentations [27 ###reference_b27###], generating data in an adversarial manner [28 ###reference_b28###, 7 ###reference_b7###, 29 ###reference_b29###] or constructing sample interpolation [30 ###reference_b30###, 31 ###reference_b31###].\nLearning with limited labeled original samples in this way will weaken their performance, since the new generated data will dominate and the domain shift caused by the artificial data manipulations will largely determine the generalization performance.\nIn this work, we propose a domain generalization framework to explicitly model the unknown target domain shift under limited source knowledge,\nby extrapolating beyond the domain shifts among multiple source domains in a probabilistic setting via distributionally robust optimization (DRO) [32 ###reference_b32###].\nTo model the 
shifts between training and test distributions,\nDRO usually assumes the testing data is generated by a perturbed distribution of the underlying data distribution, and the perturbation is bounded explicitly by an uncertainty set. It then optimizes the worst-case performance of a model over the uncertainty set to hedge against the perturbations [33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###, 36 ###reference_b36###].\nThe uncertainty set contains distributions that belong to a non-parametric distribution family, which is typically distributions centered around the empirical training distributions defined via some divergence metrics, e.g., Kullback\u2013Leibler divergence [32 ###reference_b32###], or other -divergences\n[37 ###reference_b37###, 38 ###reference_b38###, 39 ###reference_b39###, 40 ###reference_b40###], or Wasserstein distance [33 ###reference_b33###, 41 ###reference_b41###, 42 ###reference_b42###, 43 ###reference_b43###, 44 ###reference_b44###], etc.\nThese pre-defined distance constraints of uncertainty sets will confer robustness against a set of perturbations of distributions.\nAs a promising tool that connects distribution uncertainty and model robustness, DRO has been incorporated into domain generalization in some works.\nVolpi et al. [7 ###reference_b7###] augmented the data distribution in an adversarial manner, which appends some new perturbed samples from the fictitious worst-case target distributions at each iteration, and the model is updated on these samples.\nDuchi et al. [40 ###reference_b40###] solves the DRO to learn a model within a -divergence uncertainty set and learns the best radius of the set in a heuristic way by validating on part of the training data.\nLet denote the input feature and denote the label.\nWhile the studies by [7 ###reference_b7###] and [40 ###reference_b40###] discuss the distributional shifts directly in the joint distribution ,\nour work takes a distinct approach by decomposing the joint distribution and establishing class-specific distributional uncertainty sets, which enables us to manage possible varying degrees of distributional perturbations for each class in a more explicit manner.\nWhen labeled training source samples are limited in source domains, the distributional perturbations for each class could vary widely. In such a scenario,\nunifying these varying degrees of domain perturbations within a single shared uncertainty set as have been done for the joint distribution is potentially overlooking the inherent differences among classes.\nAs such, to explicitly examine the distributional shift among classes, we decompose the joint distribution and address each part independently. Our primary focus lies in managing the class-conditional shift [45 ###reference_b45###], under the assumption that there is no shift in the class prior distribution, i.e., the distribution stays consistent across all source domains. 
Furthermore, we also illustrate how our research can be readily expanded to situations that involve a shift in the class prior distribution.\nTo be more specific, we encode the domain perturbations of each class within a class-specific Wasserstein uncertainty set.\nCompared with Kullback\u2013Leibler divergence, Wasserstein distance is well-known for its ability to measure divergence between distributions defined on different probability space, which may happen when the limited samples have no overlap.\nWhile the classic DRO with one Wasserstein uncertainty set can be formulated into a tractable convex problem [46 ###reference_b46###],\ntractability results for DRO with multiple Wasserstein uncertainty sets for each class are also available [34 ###reference_b34###].\nIt is crucial to set appropriate uncertainty sets based on training data from multiple source domains for the success of DRO, since they control the conservatism of the optimization problem [43 ###reference_b43###]. A richer uncertainty set may contain more true target distributions with higher confidence, but comes with more conservative and less practical solution.\nMore precise uncertainty set incentivizes higher complexity and more difficult solution.\nTherefore, uncertainty sets should be large enough to guarantee robustness, but not so large as to overlap with each other.\nWe manage to control the discriminability among class-specific uncertainty sets with additional constraints while ensuring the largest possible uncertainty.\nWhen performing classification on data from target domains,\nwe conduct a test-time adaptation strategy to further reduce the domain shift and make inference for testing data adaptively.\nWe employ optimal transport weights to apply the optimal classifier learned from the source distributions on the test sample, which we prove to be equivalent to transporting the target samples to source domains before making the prediction.\nIn summary, our main contributions include:\nWe propose a domain generalization framework that solves the Wasserstein distributionally robust optimization problem to learn a robust model over multiple source domains, where class-conditional domain shifts are formulated in a probabilistic setting within class-specific Wasserstein uncertainty sets.\nTo improve upon the original Wasserstein distributionally robust optimization method with heuristic magnitude of uncertainty, we design a constraint that balances robustness and discriminability of uncertainty sets.\nWe develop a test-time optimal transport-based adaptation module to make adaptive and robust inferences for samples in the target domain.\nA generalization bound on the target classifier is presented. Experiments on several multi-domain vision datasets show the effectiveness of our proposed framework comparing with the state-of-the-arts."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Preliminaries and Problem Setup",
|
| 15 |
+
"text": "For the common -class classification problem, denote the feature space as and the label space as\n.\nLet be the prediction function which assigns each feature vector as class with likelihood . Here denotes the probability simplex. Based on the prediction function , the corresponding classifier maps each feature vector to the class (ties are broken arbitrarily). In the following, we will also use to represent the classifier.\nGiven training samples drawn i.i.d from the true data-generating distribution over ,\nwe denote the empirical class-conditional distributions for each class as\nHere, indicates a Dirac measure centered at and is the indicator function.\nTherefore, can be viewed as the empirical distribution for training samples within the class .\nIn light of [34 ###reference_b34###, 35 ###reference_b35###], the test distribution of each class is likely to be distributions centered around the empirical class-conditional distribution within the uncertainty set defined using, for example, the Wasserstein distance.\nThe Wasserstein distance [47 ###reference_b47###, 48 ###reference_b48###] of order between any two distributions and ,\nis defined as:\nwhere is the collection of all joint distributions with the first and second marginals being the distribution and , respectively.\nWe consider the Wasserstein distance of order , and the corresponding norm is set as Euclidean distance.\nThus, we have the test distribution of each class belongs to the following set:\nwhere denotes the radius of the uncertainty set and denotes the set of all probability distributions over .\nA robust classifier (or equivalently the prediction function ) can be obtained by solving the following minimax optimization problem:\nwhere is the total risk of the classifier on certain distributions .\nThe inner maximum problem refers to the worst-case risk over uncertainty sets .\nSuppose is an optimal solution pair to the saddle-point problem (3 ###reference_###), then are called the least favorable distributions (LFDs) [49 ###reference_b49###],\nand induces the optimal classifier that minimizes the worst-case risk.\nThe likelihood that a sample is misclassified is usually taken as the risk, i.e., for any sample with real label . 
Specially, when assuming the simple case with equal class prior distributions for all classes, the total risk of misclassifying data from all classes is\nHowever, in a more general classification problem, to compensate for the possible class imbalance scenario, a series of class-weighting methods assign different weights to misclassifying samples from different classes [50 ###reference_b50###, 51 ###reference_b51###].\nOne of the most natural approaches is to incorporate the class prior distributions of each class into the risk function [52 ###reference_b52###, 53 ###reference_b53###] as\nwhich is a general form of (4 ###reference_###).\nIn domain generalization problems, we have access to source domains , with training samples from the -th source domain drawn i.i.d from the joint distribution on .\nThe goal is to learn a robust classifier that performs well on the unseen target domain ,\nwhich contains instances from the joint distribution .\nFor each class , denote the empirical class-conditional distributions in source domain and target domain as and , respectively.\nInstead of constructing uncertainty sets relative to the empirical (training) distributions of a single domain as in the classic DRO formulation,\nwe need to set the uncertainty sets using distributions from multiple source domains, which is detailed in the next section.\n###figure_1### ###figure_2### ###figure_3###"
|
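As a concrete illustration of the order-2 Wasserstein distance between two empirical class-conditional distributions used above, here is a minimal sketch using the POT library (which the paper's experiments also rely on); the helper name and toy data are our own assumptions.

```python
import numpy as np
import ot  # Python Optimal Transport

def wasserstein2(x, y):
    """W2 distance between uniform empirical measures on the rows of x and y."""
    a = np.full(len(x), 1.0 / len(x))           # uniform weights on x
    b = np.full(len(y), 1.0 / len(y))           # uniform weights on y
    cost = ot.dist(x, y, metric='sqeuclidean')  # pairwise squared distances
    return np.sqrt(ot.emd2(a, b, cost))         # exact OT cost, then square root

rng = np.random.default_rng(0)
source_class_k = rng.normal(0.0, 1.0, size=(20, 5))   # toy features of one class
target_class_k = rng.normal(0.5, 1.0, size=(15, 5))   # a shifted version
print(wasserstein2(source_class_k, target_class_k))
```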
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Wasserstein Distributionally Robust Domain Generalization",
|
| 21 |
+
"text": "In this section, we present our proposed framework for domain generalization that leverages the empirical distributions from multiple source domains as shown in Figure 1a ###reference_sf1###, and the process of distributionally robust optimization is shown in Figure 1b ###reference_sf2###.\nThe adaptive inference for the target domain is shown in Figure 1c ###reference_sf3###.\nHere we show binary classification for simplicity.\nMore specifically, we first extrapolate the class-conditional source distributions to a Wasserstein uncertainty set for each class. Figure 1a ###reference_sf1### illustrates the construction of uncertainty sets of two classes. Their closeness is further controlled by the parameter to ensure discriminability. A convex solver then solves the distributionally robust optimization over these uncertainty sets, obtaining\nthe least favorable distributions (LFDs), which are represented as probability mass vectors depicted in Figure 1b ###reference_sf2###. Figure\n1c ###reference_sf3### shows the inference process for target samples, where optimal transport [54 ###reference_b54###] is used to re-weight LFDs adaptively.\nDetails of the construction of uncertainty sets and the additional Wasserstein constraints could be found in Sections III-A ###reference_### and III-B ###reference_###. Section III-C ###reference_### discusses the re-formulation of the Wasserstein robust optimization. Adaptive inference for samples in the target domain is presented in section III-D ###reference_###. In III-E ###reference_###, we further analyze the generalization bound of the proposed framework."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "III-A Construction of Uncertainty Sets",
|
| 27 |
+
"text": "We construct the uncertainty sets controlled\nmainly by two terms: the reference distribution that represents the center of the uncertainty set,\nand the radius parameter that controls the size of the set, i.e., an upper bound of the divergence between the reference distribution and other distributions in the set.\nWe use Wasserstein barycenter [55 ###reference_b55###] as the reference distribution, which is the average of multiple given distributions and is capable of leveraging the inherent geometric relations among them [20 ###reference_b20###]. Given empirical class-conditional distributions for each class from different source domains, the Wasserstein barycenter for class is defined as\nwhich could be a proxy of the reference distribution for each uncertainty set.\nSuppose each barycenter supports on samples uniformly, i.e., , where are the barycenter samples for class ,\nthen (6 ###reference_###) only optimizes over the locations .\nTo ensure that the uncertainty sets are large enough\nto avoid misclassification for unseen target samples, the maximum of all Wasserstein distances between class-conditional distributions of each source domain and the barycenter ,\nis used as the radius for each class :\nIn this way, we can construct the Wasserstein uncertainty set of radius centered around for each class following (2 ###reference_###):\nFigure 1a ###reference_sf1### shows the construction process of the uncertainty sets for two classes."
|
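A hedged sketch of how the reference distribution and radius of one class-specific uncertainty set could be computed with POT's free-support barycenter solver; the function signature, support size and initialization below are our assumptions, not the paper's exact procedure.

```python
import numpy as np
import ot

def class_barycenter_and_radius(domain_samples, n_support=10, seed=0):
    """domain_samples: list of (n_r, d) arrays, one per source domain, all of class k."""
    rng = np.random.default_rng(seed)
    weights = [np.full(len(x), 1.0 / len(x)) for x in domain_samples]
    d = domain_samples[0].shape[1]
    init = rng.normal(size=(n_support, d))      # initial barycenter support points
    bary = ot.lp.free_support_barycenter(domain_samples, weights, init)
    # Radius: largest W2 distance from any source class-conditional to the barycenter.
    radius = max(
        np.sqrt(ot.emd2(w, np.full(n_support, 1.0 / n_support),
                        ot.dist(x, bary, metric='sqeuclidean')))
        for x, w in zip(domain_samples, weights)
    )
    return bary, radius

toy = [np.random.default_rng(r).normal(r * 0.3, 1.0, size=(12, 4)) for r in range(3)]
bary, radius = class_barycenter_and_radius(toy, n_support=8)
print(bary.shape, round(float(radius), 3))
```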
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.2",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "III-B Balance Robustness and Discriminability",
|
| 33 |
+
"text": "When the source training samples are limited,\nthe class-conditional distributions may\nvary widely in practice.\nIn this situation, the radius computed from (7 ###reference_###) tends to be overly large,\nand the uncertainty sets of different classes may overlap with each other, leading to indistinguishable LFDs for optimization problem (3 ###reference_###).\nAs shown in Figure 2 ###reference_###, overlap between each pair of class-specific uncertainty sets exist as the sum of their radius is larger than the Wasserstein distance between the corresponding barycenters.\n###figure_4### Discriminability of LFDs is necessary since this leads to a well-defined problem of (3 ###reference_###), which indirectly controls the discriminability of data from different classes.\nWe add one more constraint to obtain significantly different LFDs that are discriminable, characterized by the Wasserstein distance between each pair of LFDs within classes:\nwhere is the threshold that indicates the discriminability, which could be tuned on a validation domain.\nIn this way, robustness is ensured by large enough Wasserstein uncertainty sets, and the threshold guarantees discriminability among the uncertainty sets."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.3",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "III-C Distributionally Robust Optimization",
|
| 39 |
+
"text": "Incorporating the constraints (9 ###reference_###) into (3 ###reference_###), we aim to solve the following minimax problem\nWe establish the following theorem, stating a convex approximation of problem (10 ###reference_###).\nSuppose the Wasserstein barycenter for each class as defined in (6 ###reference_###) is supported on samples.\nLet be the union of the support of which contains samples in total.\nThe class prior distributions of each class is denoted as .\nDenote each distribution within the uncertainty set as .\nLet be the pairwise distance matrix of samples, , be the coupling matrix between and ,\nand be the coupling matrix between any two distributions in different classes.\nWhen using the Wasserstein metric of order 2, the least favorable distributions of the problem (10 ###reference_###) could be obtained by solving:\nand the optimal prediction function of (10 ###reference_###) satisfies\n for any .\nThe constraints on restrict each target class-conditional distribution to its respective uncertainty set of radius .\nThe constraints on restrict the Wasserstein distance between each pair of class-conditional distributions in the target domain following (9 ###reference_###).\nBased on the above theorem,\nthe classification for any sample in the sample set is given by .\nThe proof can be found in\nthe supplementary material."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.4",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "III-D Adaptive Inference by Test-time Adaptation",
|
| 45 |
+
"text": "Since the barycenters are the weighted average of distributions from multiple source domains, the barycenter samples in the support set could be viewed as samples from a generalized source domain denoted as .\nFor any sample in , the likelihood that it is assigned to each class could be decided based on by a non-parametric inference method such as KNN [35 ###reference_b35###]. When making predictions for samples from an unseen target domain , the domain shift between and needs to be considered. We adopt optimal transport to reduce the domain shift adaptively by the following test-time adaptation process.\nSuppose and are the empirical marginal distributions of the feature vectors from the generalized source domain and a target domain , respectively.\nDenote the coupling matrix of transporting from target to the generalized source distribution using optimal transport [54 ###reference_b54###]\nas , where each vector , , represents the transported mass from\nthe -th target sample to each of the barycenter samples.\nIn most optimal transport-based domain adaptation methods, each target sample , , is first transported to in the generalized source domain by the barycentric mapping:\nthen having its label inferred based on the classifier learned on the labeled samples.\nInstead of such a two-step process, we propose an equivalent single-step inference process. The following proposition states the equivalence,\nand the proof can be found in the supplementary.\nGiven the coupling matrix . Suppose we transport the target sample from the empirical target distribution to the generalized source domain empirical distribution \nby the barycentric mapping as shown in (12 ###reference_###), and obtain the class likelihood by re-weighting of all the samples using the weight function .\nThen the resulting classifier is equivalent to directly re-weighting LFDs on the barycenter samples using the coupling matrix. The equivalent classification result is:\nThis proposition illustrates that domain difference between target domain and generalized source domain can be eliminated by adaptively applying the coupling matrix in the inference stage, without actually transporting the target samples to the generalized source domain.\nDenote the LFDs for all classes as .\nBased on Proposition 1 ###reference_position1###,\nthe predicted class likelihood of each target sample can be written as\nwhere .\nThe algorithm is summarized in Algorithm 1 ###reference_###.\nFurther adding the optimal-transport based adaptive inference leads to our complete framework Wasserstein Distributionally Robust Domain Generalization (WDRDG)."
|
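A hedged sketch of one way to realize the test-time re-weighting described above: compute an exact OT coupling from the target samples to the barycenter support and score each class by the coupling-weighted least favorable distribution (LFD) mass, instead of transporting the samples themselves. Variable names and the uniform marginals are our assumptions.

```python
import numpy as np
import ot

def adaptive_predict(target_x, support_x, lfd):
    """target_x: (m, d) target features; support_x: (n, d) barycenter support;
    lfd: (n, K) nonnegative class masses on the support (one LFD per class)."""
    a = np.full(len(target_x), 1.0 / len(target_x))
    b = np.full(len(support_x), 1.0 / len(support_x))
    coupling = ot.emd(a, b, ot.dist(target_x, support_x, metric='sqeuclidean'))
    scores = coupling @ lfd + 1e-12               # re-weighted class mass per target sample
    scores /= scores.sum(axis=1, keepdims=True)   # normalise to class likelihoods
    return scores.argmax(axis=1), scores
```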
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.5",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "III-E Generalization Analysis",
|
| 51 |
+
"text": "We further analyze the generalization risk of our proposed method.\nOur analysis\nconsiders the domain shift between the target domain and the generalized source domain.\nBased on (14 ###reference_###), the classification decision for the test sample in the target domain is based on the weighted average\nConsider a binary classification problem with label set .\nLet represents the prediction vector of belonging to either classes.\nThe true labeling function is denoted as .\nConsidering the simple case that all classes are balanced,\nthe expected risk that the correct label is not accepted for samples in any distribution is denoted as .\nWe now present the following theorem stating the generalization bound.\nSuppose the distributionally robust prediction function learned from the sample set is -Lipschitz continuous for some .\nLet and be the probability distributions for the generalized source and target domain, respectively.\nThen the risk on the target distribution follows\nwhere .\nThe first term is the risk on the barycenter distribution .\nThe second term shows that the divergence between the barycenter distribution and target distribution, measured by the Wasserstein distance (of order ).\nThis theorem shows that the generalization risk on the target domain is affected by\nthe Wasserstein distance between the barycenter distribution and the target distribution, which represents the gap between the generalized source domain and the target domain.\nBy applying the concentration property of the Wasserstein distance [56 ###reference_b56###], we can measure the generalization risk based on empirical Wasserstein distances similar to Theorem 3 in [57 ###reference_b57###]. Under the assumption of Theorem 2 ###reference_orem2###, if the two probability distributions and satisfy inequality [56 ###reference_b56###],\nthen for any and , there exists some constant depending on such that for any and ,\nwith probability at least the following holds for the risk on the target domain\nHere denotes the dimension of the feature space. The last term illustrates the importance of getting more labeled samples from the generalized source domain.\nThis result show that reducing the Wasserstein distance between the barycenters and target distributions will lead to tighter upper bound for the risk of the learned model on the target domain.\nTherefore, it provides a theoretical motivation to our design of the test-time adaptation, which reduces such domain gap by optimal transport.\nDetails of the proof could be found in the supplementary material."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "IV Experiments",
|
| 57 |
+
"text": ""
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.1",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "IV-A Datasets",
|
| 63 |
+
"text": "To evaluate the effectiveness of our proposed domain generalization framework,\nwe conduct experiments on three datasets:\nthe VLCS [58 ###reference_b58###] dataset, the PACS [59 ###reference_b59###] dataset, and the Rotated MNIST [60 ###reference_b60###] dataset.\nVLCS dataset\nThis domain generalization benchmark contains images from four image classification datasets: PASCAL VOC2007 (V), LabelMe (L), Caltech-101 (C), and SUN09 (S),\ndenoted as domains , , , and , respectively [61 ###reference_b61###].\nThere are five common categories: bird, car, chair, dog and person.\nPACS dataset\nThe PACS dataset contains images of four domains: Photos (P), Art painting (A), Cartoon (C) and Sketch (S) [59 ###reference_b59###].\nThere are in total types of objects in this classification task, i.e., dog, elephant, giraffe, guitar, horse, house, and person.\nRotated MNIST dataset\nWe constructed the Rotated MNIST dataset with four domains, , , and following the common settings [60 ###reference_b60###].\n denotes the domain containing original images from the MNIST dataset, and\nwe rotated each image in the original MNIST dataset by , and degrees clockwise, respectively to generate the dataset of , and .\nSome example images are shown in Figure 3 ###reference_###.\nWe randomly sampled\namong digits .\n###figure_5### ###figure_6### ###figure_7### ###figure_8###"
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "IV-B Experimental Configuration",
|
| 69 |
+
"text": "We evaluate each method on the multi-domain datasets via the leave-one-domain-out experiments,\ni.e., we train a model based on the source domains and test on the hold-out unseen target domain.\nFor example, when the target domain is , then the transfer direction is from three source domains to a target domain, i.e., , and the average of test accuracies of four cross-domain experiments is taken as the final result.\nWe mainly consider the scenario when we have only limited labeled data from the source domains.\nTherefore, for each domain, we randomly select some images to form the training set, validation set and test set for the cross-domain classification.\nThe training set is used to learn robust models, whose\nparameters are then selected on the validation set aggregated by the validation sets of each source domain.\nThe performance of a model is finally evaluated on the test set.\nDetails of the sets for training, validation and testing are as follows:\nTraining set For each domain, we randomly select up to images. To be more specific, we set the number of training images per category per domain to be a number in the set .\nThe training data from the three source domains form the training set.\nValidation set For each domain, images per category are randomly selected. The validation data from the three source domains form the validation set.\nTest set We sample images per category for each domain. The sampled test data from the unseen target domain form the test set.\nWe repeat the above sampling process times for all datasets, so that the experiments are based on trials.The average results of all trials are finally reported.\nFeatures pretrained on neural networks are taken as our input.\nFor the Rotated MNIST dataset, the Resnet-18 [62 ###reference_b62###] pretrained on the ImageNet is used to extract -dimensional features as the inputs.\nFor the VLCS dataset, the pretrained -dimensional DeCAF features [63 ###reference_b63###] are employed as the inputs of our algorithm following previous works [22 ###reference_b22###, 64 ###reference_b64###].\nFor the PACS dataset, we use the ImageNet pre-trained AlexNet [65 ###reference_b65###] as the backbone network to extract the -dimensional features."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.3",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "IV-C Baseline Methods",
|
| 75 |
+
"text": "We compare our proposed WDRDG framework with the following baseline methods in terms of the average classification accuracy.\nAll methods for comparison are summarized as below:\nKNN: We adopt the combination of training instances from all source domains to train the nearest neighbor classifier.\nMDA [19 ###reference_b19###]: We apply Multidomain Discriminant Analysis (MDA) to learn domain-invariant feature transformation\nthat is applicable when changes across domains.\n1-NN is adopted as a classifier to the learned feature transformations for classification.\nCIDG [18 ###reference_b18###]: Conditional Invariant Domain Generalization (CIDG) finds a linear transformation to minimize the total domain scatter with regard to each class-conditional distributions. The learned features are also classified using KNN.\nMLDG [13 ###reference_b13###]: We consider this meta-learning based domain generalization method as another baseline which models unseen target domain shift.\nA simple two-layer network is trained to learn the classifier.\nFor our proposed WDRDG framework,\nwe use the CVXPY package [66 ###reference_b66###] to solve the distributionally robust optimization problem.\nThe Wasserstein distance of order 2 is used for all experiments, and calculated with the POT package [67 ###reference_b67###]."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.4",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "IV-D Results and Discussion",
|
| 81 |
+
"text": "###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### In this section, we present the results for domain generalization on all three datasets.\nWhen each domain serves as the target domain, the results are shown in Figure 4 ###reference_###, with the plotted lines representing the average performance over trials and the shaded area representing the corresponding standard deviation.\nFor the VLCS dataset, we report the results in the first row in Figure 4 ###reference_###.\nIn all four cases when each domain serves as the unseen target domain, our method achieves better classification accuracy and standard deviation than other methods when the training sample size for each class is very few, i.e., 2, 3 or 5.\nThe advantage over MLDG then levels off as the sample size reaches to over 10 per class.\nThe performance improvement against MLDG reaches as high as , , , with only 2 training samples for each class when the target domain is PASCAL VOC2007, LabelMe, Caltech-101 and SUN09, respectively, which confirms that our method is efficient for few-shot cases.\nThe second row of Figure 4 ###reference_### reports the classification accuracy results for the PACS dataset.\nThe proposed WDRDG achieves the best results in accuracy and standard deviation when the target domain is Art Painting, Cartoon, or Sketch using different training sample size, and MLDG outperforms WDRDG when the target domain is Photos with the sample size 15 for each class.\nWDRDG outperforms MLDG by up to , , , for each target domain when the training sample size is 2.\nThis validates the effect of our method when the training sample size is limited.\nThe improvement of WDRDG over other methods on the PACS dataset is relatively larger compared with the improvements on the VLCS dataset. This improvement is especially obvious over MDA and CIDG when the target domain is Sketch, shown in the fourth column of the second row in Figure 4 ###reference_###.\nThis may because that the differences among domains are greater in PACS where the image styles are obviously different compared with in VLCS, where samples from different domains are real-world images collected from different perspectives or scales. This demonstrates that our WDRDG could better handle scenarios with larger unseen domain shift.\nThe results for the Rotated MNIST dataset in the third row of Figure 4 ###reference_### also yield similar conclusions. 
As the training sample size increases, almost all methods converge to the same accuracy for different target domains.\nWhen the training sample size is smaller, i.e., the training sample per class for each source domain is , the advantage of our proposed framework is more obvious.\nWDRDG outperforms MLDG by , , , when the training sample size is 2 for each class for target domain , , , and , respectively.\nWhen the training sample size is large, e.g., the training sample per class for each source domain is , even the simple KNN method performs well.\nThis is consistent with the analysis on the above two datasets.\nFigure 5 ###reference_### reports the average performance over different target domains on the three datasets.\nOverall, our method is the most stable under different numbers of training samples, with a narrower shaded band of standard deviation.\nAs the size of training samples gets bigger, all methods tend to perform better.\nOn the PACS and Rotated MNIST datasets, WDRDG achieves the best average performance under different training sample sizes compared with other methods.\nOn the VLCS dataset, WDRDG also achieves good accuracies with smaller standard deviation.\nIn addition, our method shows a greater advantage over others in few-shot settings. When the given training samples are limited to fewer than 10 (i.e., 2, 3, 5, 7 in our experiments) per class, WDRDG provides at least , , better generalization ability than others on the VLCS, PACS and Rotated MNIST datasets, respectively."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.5",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "IV-E Ablation Study for the Test-time Adaptation",
|
| 87 |
+
"text": "To explore the effectiveness of the test-time adaptation based on optimal transport, we compare our framework with and without this adaptive inference module. For the non-adaptive inference, the nearest neighbor for any test sample from the target domain is found by the simple 1-NN over barycenter samples.\nWe compare the results of using training sample size of per class for each source domain.\nFrom the results in Table I ###reference_###, II ###reference_###, and III ###reference_### for VLCS, PACS and Rotated MNIST dataset, respectively,\nwe can make several observations.\nOur WDRDG framework with the adaptive inference module results in better average performance for all three datasets, with up to higher mean accuracy for the VLCS dataset with 5 training samples per class, performance improvement for the PACS dataset with 15 training samples per class, and improvements for the Rotated MNIST dataset with 15 training samples per class.\nNote that when the target domain is Sketch on the PACS dataset, the improvements are especially obvious compared with other targets, reaching , , and when the training sample size for each class is , respectively.\nSimilar results could be found on the Rotated MNIST dataset when the target domain is or when the training sample size per class is or , with up to performance improvements. This improvement is more obvious compared with other targets or , which obtains up to performance improvements using the adaptive inference module.\nOne thing they share in common is these target domains are more different with given source domains, which shows larger unseen distribution shifts.\nThis validates the robustness of our adaptive inference module for even harder, unseen target domains."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.6",
|
| 91 |
+
"parent_section_id": "4",
|
| 92 |
+
"section_name": "IV-F Analysis of Imbalanced Classes among Source Domains",
|
| 93 |
+
"text": "###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### In previous experiments, we actually assume the training sample size per class in the source domains are the same under the setting of no class prior distribution shift, i.e., the distribution of is the same across all source domains.\nTo show the feasibility of extending our framework to scenarios with class prior distribution shift, we further conduct experiments when the categories in source domains are imbalanced, i.e., there are shifts among of different domains.\nWe randomly sample the training sample size for each class from on the Rotated MNIST dataset here. The distribution of sample number for each class when each domain is chosen as the target domain is shown in Figure 6 ###reference_###.\nThere are cases when different classes have similar sample number, e.g., in source domain when the target domain is , or in source domain when the target domain is .\nIn other source domains, different classes may have quite different number of samples, e.g., in source domain when target domain is , or in source domain when target domain is .\nWe compare our framework WDRDG with other methods, and the results are shown in Figure 7 ###reference_###.\nWhen the target domain is , our method achieves similar accuracies with MLDG but with smaller deviation, while in other cases WDRDG outperforms other baselines by at least\n, , when the target domain is , , , respectively.\nOur framework outperforms other methods on average with smaller standard deviation, which validates the generalization ability of our framework when the source domains have class prior distribution shift."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5",
|
| 97 |
+
"parent_section_id": null,
|
| 98 |
+
"section_name": "Conclusion",
|
| 99 |
+
"text": "In this research, we proposed a novel framework for domain generalization to enhance model robustness when labeled training data of source domains are limited.\nWe formulated the distributional shifts for each class with class-specific Wasserstein uncertainty sets and optimized the model over the worst-case distributions residing in the uncertainty sets via distributionally robust optimization.\nTo reduce the difference between source and target domains, we proposed a test-time domain adaptation module through optimal transport to make adaptive inference for unseen target data. We found that our domain generalization framework with this adaptive inference module works better when target domains are more different compared with source domains.\nExperimental results on Rotated MNIST, PACS and VLCS datasets demonstrate that our proposed WDRDG framework could learn a robust model for unseen target domains based on limited source data, and we also showed that its advantage is more obvious in few-shot settings.\nTo perfect this work in the future, we would study the usage of class priors in constructing more realistic uncertainty sets, and explore measurable relationship among source domains to better leverage the source distributions to model possible target distributions."
|
| 100 |
+
}
|
| 101 |
+
],
|
| 102 |
+
"appendix": [],
|
| 103 |
+
"tables": {
|
| 104 |
+
"1": {
|
| 105 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>The effect of the optimal transport-based test-time adaptation (TTA) for adaptive inference on the VLCS dataset. WDRDG with the TTA module results in better performance when using a different number of training samples.\n</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T1.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T1.1.1.1.1.1\">training sample size/class</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T1.1.1.1.2.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"5\" id=\"S4.T1.1.1.1.3\">Target</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.2.2.1\">V</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.2.2.2\">L</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.2.2.3\">C</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.2.2.4\">S</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.2.2.5\">Average</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.1.3.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T1.1.3.1.1.1\">5</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.3.1.2\">WDRDG (w/o. TTA)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.3.1.3\">0.516</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.3.1.4\">0.372</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.3.1.5\">0.554</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.3.1.6\">0.356</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.3.1.7\">0.450</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.4.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.2.1\">WDRDG (w. TTA)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.2.2\">0.582</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.2.3\">0.448</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.2.4\">0.494</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.2.5\">0.458</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.4.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.4.2.6.1\">0.496</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.5.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.1.5.3.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T1.1.5.3.1.1\">10</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.5.3.2\">WDRDG (w/o. 
TTA)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.5.3.3\">0.540</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.5.3.4\">0.402</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.5.3.5\">0.516</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.5.3.6\">0.334</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.5.3.7\">0.448</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.6.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.4.1\">WDRDG (w. TTA)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.4.2\">0.546</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.4.3\">0.410</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.4.4\">0.546</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.4.5\">0.450</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.6.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.6.4.6.1\">0.488</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.7.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_t\" id=\"S4.T1.1.7.5.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T1.1.7.5.1.1\">15</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.7.5.2\">WDRDG (w/o. TTA)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.7.5.3\">0.510</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.7.5.4\">0.378</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.7.5.5\">0.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.7.5.6\">0.39</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.7.5.7\">0.487</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.8.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.8.6.1\">WDRDG (w. TTA)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.8.6.2\">0.568</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.8.6.3\">0.438</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.8.6.4\">0.564</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.8.6.5\">0.440</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.8.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.8.6.6.1\">0.503</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 106 |
+
"capture": "TABLE I: The effect of the optimal transport-based test-time adaptation (TTA) for adaptive inference on the VLCS dataset. WDRDG with the TTA module results in better performance when using a different number of training samples.\n"
|
| 107 |
+
},
|
| 108 |
+
"2": {
|
| 109 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>The effect of the optimal transport-based test-time adaptation (TTA) for adaptive inference on the PACS dataset. WDRDG with the TTA module results in better performance when using a different number of training samples.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T2.1.1.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.1.1.1\">training sample size/class</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.1.1.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.1.1.1.2.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"5\" id=\"S4.T2.1.1.1.3\">Target</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.2.2.1\">P</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.2.2.2\">A</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.2.2.3\">C</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.2.2.4\">S</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.2.2.5\">Average</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.1.3.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.1.3.1.1.1\">5</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1.2\">WDRDG (w/o. TTA)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1.3\">0.504</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1.4\">0.350</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1.5\">0.471</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1.6\">0.237</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.3.1.7\">0.391</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.4.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.2.1\">WDRDG (w. TTA)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.2.2\">0.514</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.2.3\">0.403</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.2.4\">0.441</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.2.5\">0.399</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.4.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.4.2.6.1\">0.439</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.5.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.1.5.3.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.1.5.3.1.1\">10</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.5.3.2\">WDRDG (w/o. 
TTA)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.5.3.3\">0.559</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.5.3.4\">0.374</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.5.3.5\">0.480</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.5.3.6\">0.259</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.5.3.7\">0.418</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.6.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.4.1\">WDRDG (w. TTA)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.4.2\">0.556</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.4.3\">0.421</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.4.4\">0.519</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.4.5\">0.409</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.6.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.6.4.6.1\">0.476</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.7.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_t\" id=\"S4.T2.1.7.5.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.1.7.5.1.1\">15</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.7.5.2\">WDRDG (w/o. TTA)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.7.5.3\">0.549</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.7.5.4\">0.404</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.7.5.5\">0.491</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.7.5.6\">0.251</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.7.5.7\">0.424</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.8.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.8.6.1\">WDRDG (w. TTA)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.8.6.2\">0.533</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.8.6.3\">0.477</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.8.6.4\">0.475</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.8.6.5\">0.462</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.8.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.8.6.6.1\">0.487</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 110 |
+
"capture": "TABLE II: The effect of the optimal transport-based test-time adaptation (TTA) for adaptive inference on the PACS dataset. WDRDG with the TTA module results in better performance when using a different number of training samples."
|
| 111 |
+
},
|
| 112 |
+
"3": {
|
| 113 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>The effect of the optimal transport-based test-time adaptation (TTA) for adaptive inference on the Rotated MNIST dataset. WDRDG with the TTA module results in better performance when using a different number of training samples.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.4.5.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.4.5.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.4.5.1.1.1\">training sample size/class</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.4.5.1.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.4.5.1.2.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"5\" id=\"S4.T3.4.5.1.3\">Target</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.4.4.4\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.4.4.5\">Average</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.4.6.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.6.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.4.6.1.1.1\">5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.6.1.2\">WDRDG (w/o. TTA)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.6.1.3\">0.593</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.6.1.4\">0.640</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.6.1.5\">0.577</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.6.1.6\">0.553</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.6.1.7\">0.591</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.7.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.7.2.1\">WDRDG (w. TTA)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.7.2.2\">0.647</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.7.2.3\">0.732</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.7.2.4\">0.663</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.7.2.5\">0.613</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.7.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.7.2.6.1\">0.664</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.8.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.8.3.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.4.8.3.1.1\">10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.8.3.2\">WDRDG (w/o. 
TTA)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.8.3.3\">0.567</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.8.3.4\">0.690</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.8.3.5\">0.647</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.8.3.6\">0.557</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.8.3.7\">0.615</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.9.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.9.4.1\">WDRDG (w. TTA)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.9.4.2\">0.654</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.9.4.3\">0.753</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.9.4.4\">0.703</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.9.4.5\">0.633</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.9.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.9.4.6.1\">0.686</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.10.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T3.4.10.5.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T3.4.10.5.1.1\">15</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.10.5.2\">WDRDG (w/o. TTA)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.10.5.3\">0.567</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.10.5.4\">0.653</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.10.5.5\">0.677</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.10.5.6\">0.533</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.10.5.7\">0.608</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.11.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.4.11.6.1\">WDRDG (w. TTA)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.4.11.6.2\">0.660</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.4.11.6.3\">0.753</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.4.11.6.4\">0.721</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.4.11.6.5\">0.636</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.4.11.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.11.6.6.1\">0.693</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 114 |
+
"capture": "TABLE III: The effect of the optimal transport-based test-time adaptation (TTA) for adaptive inference on the Rotated MNIST dataset. WDRDG with the TTA module results in better performance when using a different number of training samples."
|
| 115 |
+
}
|
| 116 |
+
},
|
| 117 |
+
"image_paths": {
|
| 118 |
+
"1(a)": {
|
| 119 |
+
"figure_path": "2207.04913v2_figure_1(a).png",
|
| 120 |
+
"caption": "(a) Construction of uncertainty sets.\nFigure 1: An overview of our WDRDG framework, consisting of three components: (a) Wasserstein uncertainty set construction for each class based on the empirical Wasserstein barycenters and radius obtained from given source domains. One constraint is added to control the discriminability of LFDs; (b) distributionally robust optimization to solve for the least favorable distributions;\n(c) adaptive inference for target testing samples based on probability mass on LFDs and coupling matrix from optimal transportation between barycenter samples and target samples.",
|
| 121 |
+
"url": "http://arxiv.org/html/2207.04913v2/x1.png"
|
| 122 |
+
},
|
| 123 |
+
"1(b)": {
|
| 124 |
+
"figure_path": "2207.04913v2_figure_1(b).png",
|
| 125 |
+
"caption": "(b) Distributionally robust optimization.\nFigure 1: An overview of our WDRDG framework, consisting of three components: (a) Wasserstein uncertainty set construction for each class based on the empirical Wasserstein barycenters and radius obtained from given source domains. One constraint is added to control the discriminability of LFDs; (b) distributionally robust optimization to solve for the least favorable distributions;\n(c) adaptive inference for target testing samples based on probability mass on LFDs and coupling matrix from optimal transportation between barycenter samples and target samples.",
|
| 126 |
+
"url": "http://arxiv.org/html/2207.04913v2/x2.png"
|
| 127 |
+
},
|
| 128 |
+
"1(c)": {
|
| 129 |
+
"figure_path": "2207.04913v2_figure_1(c).png",
|
| 130 |
+
"caption": "(c) Adaptive inference using optimal transport.\nFigure 1: An overview of our WDRDG framework, consisting of three components: (a) Wasserstein uncertainty set construction for each class based on the empirical Wasserstein barycenters and radius obtained from given source domains. One constraint is added to control the discriminability of LFDs; (b) distributionally robust optimization to solve for the least favorable distributions;\n(c) adaptive inference for target testing samples based on probability mass on LFDs and coupling matrix from optimal transportation between barycenter samples and target samples.",
|
| 131 |
+
"url": "http://arxiv.org/html/2207.04913v2/x3.png"
|
| 132 |
+
},
|
| 133 |
+
"2": {
|
| 134 |
+
"figure_path": "2207.04913v2_figure_2.png",
|
| 135 |
+
"caption": "Figure 2: Comparison between \u03b8i*+\u03b8j*superscriptsubscript\ud835\udf03\ud835\udc56superscriptsubscript\ud835\udf03\ud835\udc57\\theta_{i}^{*}+\\theta_{j}^{*}italic_\u03b8 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT + italic_\u03b8 start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT and the Wasserstein distance W2\u2062(Bi*,Bj*)subscript\ud835\udc4a2superscriptsubscript\ud835\udc35\ud835\udc56superscriptsubscript\ud835\udc35\ud835\udc57W_{2}(B_{i}^{*},B_{j}^{*})italic_W start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ( italic_B start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT , italic_B start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT ) for all 10101010 unique pairs (i,j)\ud835\udc56\ud835\udc57(i,j)( italic_i , italic_j ) among all 5555 classes of the VLCS dataset.\nThe sum of uncertainty radius of any two classes is larger than the Wasserstein distance between the corresponding barycenters. The oversized radius will lead to overlapping class-specific uncertainty sets, and the distributions within them will be indistinguishable.",
|
| 136 |
+
"url": "http://arxiv.org/html/2207.04913v2/x4.png"
|
| 137 |
+
},
|
| 138 |
+
"3(a)": {
|
| 139 |
+
"figure_path": "2207.04913v2_figure_3(a).png",
|
| 140 |
+
"caption": "(a) r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT\nFigure 3: Visualization of example images from four domains of the Rotated MNIST dataset with rotation angles of 0\u2218superscript00^{\\circ}0 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT, 30\u2218superscript3030^{\\circ}30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT, 60\u2218superscript6060^{\\circ}60 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT, 90\u2218superscript9090^{\\circ}90 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT.",
|
| 141 |
+
"url": "http://arxiv.org/html/2207.04913v2/x5.png"
|
| 142 |
+
},
|
| 143 |
+
"3(b)": {
|
| 144 |
+
"figure_path": "2207.04913v2_figure_3(b).png",
|
| 145 |
+
"caption": "(b) r30subscript\ud835\udc5f30r_{30}italic_r start_POSTSUBSCRIPT 30 end_POSTSUBSCRIPT\nFigure 3: Visualization of example images from four domains of the Rotated MNIST dataset with rotation angles of 0\u2218superscript00^{\\circ}0 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT, 30\u2218superscript3030^{\\circ}30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT, 60\u2218superscript6060^{\\circ}60 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT, 90\u2218superscript9090^{\\circ}90 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT.",
|
| 146 |
+
"url": "http://arxiv.org/html/2207.04913v2/x6.png"
|
| 147 |
+
},
|
| 148 |
+
"3(c)": {
|
| 149 |
+
"figure_path": "2207.04913v2_figure_3(c).png",
|
| 150 |
+
"caption": "(c) r60subscript\ud835\udc5f60r_{60}italic_r start_POSTSUBSCRIPT 60 end_POSTSUBSCRIPT\nFigure 3: Visualization of example images from four domains of the Rotated MNIST dataset with rotation angles of 0\u2218superscript00^{\\circ}0 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT, 30\u2218superscript3030^{\\circ}30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT, 60\u2218superscript6060^{\\circ}60 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT, 90\u2218superscript9090^{\\circ}90 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT.",
|
| 151 |
+
"url": "http://arxiv.org/html/2207.04913v2/x7.png"
|
| 152 |
+
},
|
| 153 |
+
"3(d)": {
|
| 154 |
+
"figure_path": "2207.04913v2_figure_3(d).png",
|
| 155 |
+
"caption": "(d) r90subscript\ud835\udc5f90r_{90}italic_r start_POSTSUBSCRIPT 90 end_POSTSUBSCRIPT\nFigure 3: Visualization of example images from four domains of the Rotated MNIST dataset with rotation angles of 0\u2218superscript00^{\\circ}0 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT, 30\u2218superscript3030^{\\circ}30 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT, 60\u2218superscript6060^{\\circ}60 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT, 90\u2218superscript9090^{\\circ}90 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT.",
|
| 156 |
+
"url": "http://arxiv.org/html/2207.04913v2/x8.png"
|
| 157 |
+
},
|
| 158 |
+
"4(a)": {
|
| 159 |
+
"figure_path": "2207.04913v2_figure_4(a).png",
|
| 160 |
+
"caption": "Figure 4: Performance comparison for the VLCS, PACS and Rotated MNIST dataset under different size of training samples per class. Each row shows the results for a dataset, and each column shows the generalization result for a certain target domain. Average performance of five methods are represented by different colors, and the corresponding shadow shows the standard deviation of 5 experimental trials.\nOur WDRDG framework outperforms KNN, MDA and CIDG with higher accuracy and smaller standard deviation.\nAlso, it has more advantage over MLDG especially when the source training sample size is limited. For example, WDRDG outperforms MLDG by up to\n46.79%percent46.7946.79\\%46.79 % when the target domain is Caltech-101 in the VLCS dataset, by up to 20.95%percent20.9520.95\\%20.95 % for target domain Art Painting in the PACS dataset,\nand by up to 20.71%percent20.7120.71\\%20.71 % for target domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT in the Rotated MNIST dataset with training sample size of 2 for each class.",
|
| 161 |
+
"url": "http://arxiv.org/html/2207.04913v2/x9.png"
|
| 162 |
+
},
|
| 163 |
+
"4(b)": {
|
| 164 |
+
"figure_path": "2207.04913v2_figure_4(b).png",
|
| 165 |
+
"caption": "Figure 4: Performance comparison for the VLCS, PACS and Rotated MNIST dataset under different size of training samples per class. Each row shows the results for a dataset, and each column shows the generalization result for a certain target domain. Average performance of five methods are represented by different colors, and the corresponding shadow shows the standard deviation of 5 experimental trials.\nOur WDRDG framework outperforms KNN, MDA and CIDG with higher accuracy and smaller standard deviation.\nAlso, it has more advantage over MLDG especially when the source training sample size is limited. For example, WDRDG outperforms MLDG by up to\n46.79%percent46.7946.79\\%46.79 % when the target domain is Caltech-101 in the VLCS dataset, by up to 20.95%percent20.9520.95\\%20.95 % for target domain Art Painting in the PACS dataset,\nand by up to 20.71%percent20.7120.71\\%20.71 % for target domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT in the Rotated MNIST dataset with training sample size of 2 for each class.",
|
| 166 |
+
"url": "http://arxiv.org/html/2207.04913v2/x10.png"
|
| 167 |
+
},
|
| 168 |
+
"4(c)": {
|
| 169 |
+
"figure_path": "2207.04913v2_figure_4(c).png",
|
| 170 |
+
"caption": "Figure 4: Performance comparison for the VLCS, PACS and Rotated MNIST dataset under different size of training samples per class. Each row shows the results for a dataset, and each column shows the generalization result for a certain target domain. Average performance of five methods are represented by different colors, and the corresponding shadow shows the standard deviation of 5 experimental trials.\nOur WDRDG framework outperforms KNN, MDA and CIDG with higher accuracy and smaller standard deviation.\nAlso, it has more advantage over MLDG especially when the source training sample size is limited. For example, WDRDG outperforms MLDG by up to\n46.79%percent46.7946.79\\%46.79 % when the target domain is Caltech-101 in the VLCS dataset, by up to 20.95%percent20.9520.95\\%20.95 % for target domain Art Painting in the PACS dataset,\nand by up to 20.71%percent20.7120.71\\%20.71 % for target domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT in the Rotated MNIST dataset with training sample size of 2 for each class.",
|
| 171 |
+
"url": "http://arxiv.org/html/2207.04913v2/x11.png"
|
| 172 |
+
},
|
| 173 |
+
"4(d)": {
|
| 174 |
+
"figure_path": "2207.04913v2_figure_4(d).png",
|
| 175 |
+
"caption": "Figure 4: Performance comparison for the VLCS, PACS and Rotated MNIST dataset under different size of training samples per class. Each row shows the results for a dataset, and each column shows the generalization result for a certain target domain. Average performance of five methods are represented by different colors, and the corresponding shadow shows the standard deviation of 5 experimental trials.\nOur WDRDG framework outperforms KNN, MDA and CIDG with higher accuracy and smaller standard deviation.\nAlso, it has more advantage over MLDG especially when the source training sample size is limited. For example, WDRDG outperforms MLDG by up to\n46.79%percent46.7946.79\\%46.79 % when the target domain is Caltech-101 in the VLCS dataset, by up to 20.95%percent20.9520.95\\%20.95 % for target domain Art Painting in the PACS dataset,\nand by up to 20.71%percent20.7120.71\\%20.71 % for target domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT in the Rotated MNIST dataset with training sample size of 2 for each class.",
|
| 176 |
+
"url": "http://arxiv.org/html/2207.04913v2/x12.png"
|
| 177 |
+
},
|
| 178 |
+
"4(e)": {
|
| 179 |
+
"figure_path": "2207.04913v2_figure_4(e).png",
|
| 180 |
+
"caption": "Figure 4: Performance comparison for the VLCS, PACS and Rotated MNIST dataset under different size of training samples per class. Each row shows the results for a dataset, and each column shows the generalization result for a certain target domain. Average performance of five methods are represented by different colors, and the corresponding shadow shows the standard deviation of 5 experimental trials.\nOur WDRDG framework outperforms KNN, MDA and CIDG with higher accuracy and smaller standard deviation.\nAlso, it has more advantage over MLDG especially when the source training sample size is limited. For example, WDRDG outperforms MLDG by up to\n46.79%percent46.7946.79\\%46.79 % when the target domain is Caltech-101 in the VLCS dataset, by up to 20.95%percent20.9520.95\\%20.95 % for target domain Art Painting in the PACS dataset,\nand by up to 20.71%percent20.7120.71\\%20.71 % for target domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT in the Rotated MNIST dataset with training sample size of 2 for each class.",
|
| 181 |
+
"url": "http://arxiv.org/html/2207.04913v2/x13.png"
|
| 182 |
+
},
|
| 183 |
+
"4(f)": {
|
| 184 |
+
"figure_path": "2207.04913v2_figure_4(f).png",
|
| 185 |
+
"caption": "Figure 4: Performance comparison for the VLCS, PACS and Rotated MNIST dataset under different size of training samples per class. Each row shows the results for a dataset, and each column shows the generalization result for a certain target domain. Average performance of five methods are represented by different colors, and the corresponding shadow shows the standard deviation of 5 experimental trials.\nOur WDRDG framework outperforms KNN, MDA and CIDG with higher accuracy and smaller standard deviation.\nAlso, it has more advantage over MLDG especially when the source training sample size is limited. For example, WDRDG outperforms MLDG by up to\n46.79%percent46.7946.79\\%46.79 % when the target domain is Caltech-101 in the VLCS dataset, by up to 20.95%percent20.9520.95\\%20.95 % for target domain Art Painting in the PACS dataset,\nand by up to 20.71%percent20.7120.71\\%20.71 % for target domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT in the Rotated MNIST dataset with training sample size of 2 for each class.",
|
| 186 |
+
"url": "http://arxiv.org/html/2207.04913v2/x14.png"
|
| 187 |
+
},
|
| 188 |
+
"4(g)": {
|
| 189 |
+
"figure_path": "2207.04913v2_figure_4(g).png",
|
| 190 |
+
"caption": "Figure 4: Performance comparison for the VLCS, PACS and Rotated MNIST dataset under different size of training samples per class. Each row shows the results for a dataset, and each column shows the generalization result for a certain target domain. Average performance of five methods are represented by different colors, and the corresponding shadow shows the standard deviation of 5 experimental trials.\nOur WDRDG framework outperforms KNN, MDA and CIDG with higher accuracy and smaller standard deviation.\nAlso, it has more advantage over MLDG especially when the source training sample size is limited. For example, WDRDG outperforms MLDG by up to\n46.79%percent46.7946.79\\%46.79 % when the target domain is Caltech-101 in the VLCS dataset, by up to 20.95%percent20.9520.95\\%20.95 % for target domain Art Painting in the PACS dataset,\nand by up to 20.71%percent20.7120.71\\%20.71 % for target domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT in the Rotated MNIST dataset with training sample size of 2 for each class.",
|
| 191 |
+
"url": "http://arxiv.org/html/2207.04913v2/x15.png"
|
| 192 |
+
},
|
| 193 |
+
"4(h)": {
|
| 194 |
+
"figure_path": "2207.04913v2_figure_4(h).png",
|
| 195 |
+
"caption": "Figure 4: Performance comparison for the VLCS, PACS and Rotated MNIST dataset under different size of training samples per class. Each row shows the results for a dataset, and each column shows the generalization result for a certain target domain. Average performance of five methods are represented by different colors, and the corresponding shadow shows the standard deviation of 5 experimental trials.\nOur WDRDG framework outperforms KNN, MDA and CIDG with higher accuracy and smaller standard deviation.\nAlso, it has more advantage over MLDG especially when the source training sample size is limited. For example, WDRDG outperforms MLDG by up to\n46.79%percent46.7946.79\\%46.79 % when the target domain is Caltech-101 in the VLCS dataset, by up to 20.95%percent20.9520.95\\%20.95 % for target domain Art Painting in the PACS dataset,\nand by up to 20.71%percent20.7120.71\\%20.71 % for target domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT in the Rotated MNIST dataset with training sample size of 2 for each class.",
|
| 196 |
+
"url": "http://arxiv.org/html/2207.04913v2/x16.png"
|
| 197 |
+
},
|
| 198 |
+
"4(i)": {
|
| 199 |
+
"figure_path": "2207.04913v2_figure_4(i).png",
|
| 200 |
+
"caption": "Figure 4: Performance comparison for the VLCS, PACS and Rotated MNIST dataset under different size of training samples per class. Each row shows the results for a dataset, and each column shows the generalization result for a certain target domain. Average performance of five methods are represented by different colors, and the corresponding shadow shows the standard deviation of 5 experimental trials.\nOur WDRDG framework outperforms KNN, MDA and CIDG with higher accuracy and smaller standard deviation.\nAlso, it has more advantage over MLDG especially when the source training sample size is limited. For example, WDRDG outperforms MLDG by up to\n46.79%percent46.7946.79\\%46.79 % when the target domain is Caltech-101 in the VLCS dataset, by up to 20.95%percent20.9520.95\\%20.95 % for target domain Art Painting in the PACS dataset,\nand by up to 20.71%percent20.7120.71\\%20.71 % for target domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT in the Rotated MNIST dataset with training sample size of 2 for each class.",
|
| 201 |
+
"url": "http://arxiv.org/html/2207.04913v2/x17.png"
|
| 202 |
+
},
|
| 203 |
+
"4(j)": {
|
| 204 |
+
"figure_path": "2207.04913v2_figure_4(j).png",
|
| 205 |
+
"caption": "Figure 4: Performance comparison for the VLCS, PACS and Rotated MNIST dataset under different size of training samples per class. Each row shows the results for a dataset, and each column shows the generalization result for a certain target domain. Average performance of five methods are represented by different colors, and the corresponding shadow shows the standard deviation of 5 experimental trials.\nOur WDRDG framework outperforms KNN, MDA and CIDG with higher accuracy and smaller standard deviation.\nAlso, it has more advantage over MLDG especially when the source training sample size is limited. For example, WDRDG outperforms MLDG by up to\n46.79%percent46.7946.79\\%46.79 % when the target domain is Caltech-101 in the VLCS dataset, by up to 20.95%percent20.9520.95\\%20.95 % for target domain Art Painting in the PACS dataset,\nand by up to 20.71%percent20.7120.71\\%20.71 % for target domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT in the Rotated MNIST dataset with training sample size of 2 for each class.",
|
| 206 |
+
"url": "http://arxiv.org/html/2207.04913v2/x18.png"
|
| 207 |
+
},
|
| 208 |
+
"4(k)": {
|
| 209 |
+
"figure_path": "2207.04913v2_figure_4(k).png",
|
| 210 |
+
"caption": "Figure 4: Performance comparison for the VLCS, PACS and Rotated MNIST dataset under different size of training samples per class. Each row shows the results for a dataset, and each column shows the generalization result for a certain target domain. Average performance of five methods are represented by different colors, and the corresponding shadow shows the standard deviation of 5 experimental trials.\nOur WDRDG framework outperforms KNN, MDA and CIDG with higher accuracy and smaller standard deviation.\nAlso, it has more advantage over MLDG especially when the source training sample size is limited. For example, WDRDG outperforms MLDG by up to\n46.79%percent46.7946.79\\%46.79 % when the target domain is Caltech-101 in the VLCS dataset, by up to 20.95%percent20.9520.95\\%20.95 % for target domain Art Painting in the PACS dataset,\nand by up to 20.71%percent20.7120.71\\%20.71 % for target domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT in the Rotated MNIST dataset with training sample size of 2 for each class.",
|
| 211 |
+
"url": "http://arxiv.org/html/2207.04913v2/x19.png"
|
| 212 |
+
},
|
| 213 |
+
"4(l)": {
|
| 214 |
+
"figure_path": "2207.04913v2_figure_4(l).png",
|
| 215 |
+
"caption": "Figure 4: Performance comparison for the VLCS, PACS and Rotated MNIST dataset under different size of training samples per class. Each row shows the results for a dataset, and each column shows the generalization result for a certain target domain. Average performance of five methods are represented by different colors, and the corresponding shadow shows the standard deviation of 5 experimental trials.\nOur WDRDG framework outperforms KNN, MDA and CIDG with higher accuracy and smaller standard deviation.\nAlso, it has more advantage over MLDG especially when the source training sample size is limited. For example, WDRDG outperforms MLDG by up to\n46.79%percent46.7946.79\\%46.79 % when the target domain is Caltech-101 in the VLCS dataset, by up to 20.95%percent20.9520.95\\%20.95 % for target domain Art Painting in the PACS dataset,\nand by up to 20.71%percent20.7120.71\\%20.71 % for target domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT in the Rotated MNIST dataset with training sample size of 2 for each class.",
|
| 216 |
+
"url": "http://arxiv.org/html/2207.04913v2/x20.png"
|
| 217 |
+
},
|
| 218 |
+
"5(a)": {
|
| 219 |
+
"figure_path": "2207.04913v2_figure_5(a).png",
|
| 220 |
+
"caption": "(a) VLCS\nFigure 5: Average generalization performance of different methods on the VLCS, PACS and Rotated MNIST dataset. As the training sample size increases, all methods obtain better performance. Our WDRDG framework outperforms other baselines, especially in few-shot settings. When the sample size is less than 10 per class, WDRDG provides at least 3.75%percent3.753.75\\%3.75 %, 4.73%percent4.734.73\\%4.73 %, 3.86%percent3.863.86\\%3.86 % better generalization ability than others on the VLCS, PACS and Rotated MNIST dataset, respectively.",
|
| 221 |
+
"url": "http://arxiv.org/html/2207.04913v2/x21.png"
|
| 222 |
+
},
|
| 223 |
+
"5(b)": {
|
| 224 |
+
"figure_path": "2207.04913v2_figure_5(b).png",
|
| 225 |
+
"caption": "(b) PACS\nFigure 5: Average generalization performance of different methods on the VLCS, PACS and Rotated MNIST dataset. As the training sample size increases, all methods obtain better performance. Our WDRDG framework outperforms other baselines, especially in few-shot settings. When the sample size is less than 10 per class, WDRDG provides at least 3.75%percent3.753.75\\%3.75 %, 4.73%percent4.734.73\\%4.73 %, 3.86%percent3.863.86\\%3.86 % better generalization ability than others on the VLCS, PACS and Rotated MNIST dataset, respectively.",
|
| 226 |
+
"url": "http://arxiv.org/html/2207.04913v2/x22.png"
|
| 227 |
+
},
|
| 228 |
+
"5(c)": {
|
| 229 |
+
"figure_path": "2207.04913v2_figure_5(c).png",
|
| 230 |
+
"caption": "(c) Rotated MNIST\nFigure 5: Average generalization performance of different methods on the VLCS, PACS and Rotated MNIST dataset. As the training sample size increases, all methods obtain better performance. Our WDRDG framework outperforms other baselines, especially in few-shot settings. When the sample size is less than 10 per class, WDRDG provides at least 3.75%percent3.753.75\\%3.75 %, 4.73%percent4.734.73\\%4.73 %, 3.86%percent3.863.86\\%3.86 % better generalization ability than others on the VLCS, PACS and Rotated MNIST dataset, respectively.",
|
| 231 |
+
"url": "http://arxiv.org/html/2207.04913v2/x23.png"
|
| 232 |
+
},
|
| 233 |
+
"6(a)": {
|
| 234 |
+
"figure_path": "2207.04913v2_figure_6(a).png",
|
| 235 |
+
"caption": "Figure 6: Visualization of random sample size for each class in source domains when a different domain serves as the target domain in the Rotated MNIST dataset. For each source domain, the number of samples for different classes are shown in different colors. There are cases when different classes have similar sample number, e.g., Class 1 and 2 of source domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT when target domain is r30subscript\ud835\udc5f30r_{30}italic_r start_POSTSUBSCRIPT 30 end_POSTSUBSCRIPT, and also cases when different classes have quite different number of samples, e.g., in source domain r90subscript\ud835\udc5f90r_{90}italic_r start_POSTSUBSCRIPT 90 end_POSTSUBSCRIPT when target domain is r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT.",
|
| 236 |
+
"url": "http://arxiv.org/html/2207.04913v2/x24.png"
|
| 237 |
+
},
|
| 238 |
+
"6(b)": {
|
| 239 |
+
"figure_path": "2207.04913v2_figure_6(b).png",
|
| 240 |
+
"caption": "Figure 6: Visualization of random sample size for each class in source domains when a different domain serves as the target domain in the Rotated MNIST dataset. For each source domain, the number of samples for different classes are shown in different colors. There are cases when different classes have similar sample number, e.g., Class 1 and 2 of source domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT when target domain is r30subscript\ud835\udc5f30r_{30}italic_r start_POSTSUBSCRIPT 30 end_POSTSUBSCRIPT, and also cases when different classes have quite different number of samples, e.g., in source domain r90subscript\ud835\udc5f90r_{90}italic_r start_POSTSUBSCRIPT 90 end_POSTSUBSCRIPT when target domain is r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT.",
|
| 241 |
+
"url": "http://arxiv.org/html/2207.04913v2/x25.png"
|
| 242 |
+
},
|
| 243 |
+
"6(c)": {
|
| 244 |
+
"figure_path": "2207.04913v2_figure_6(c).png",
|
| 245 |
+
"caption": "Figure 6: Visualization of random sample size for each class in source domains when a different domain serves as the target domain in the Rotated MNIST dataset. For each source domain, the number of samples for different classes are shown in different colors. There are cases when different classes have similar sample number, e.g., Class 1 and 2 of source domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT when target domain is r30subscript\ud835\udc5f30r_{30}italic_r start_POSTSUBSCRIPT 30 end_POSTSUBSCRIPT, and also cases when different classes have quite different number of samples, e.g., in source domain r90subscript\ud835\udc5f90r_{90}italic_r start_POSTSUBSCRIPT 90 end_POSTSUBSCRIPT when target domain is r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT.",
|
| 246 |
+
"url": "http://arxiv.org/html/2207.04913v2/x26.png"
|
| 247 |
+
},
|
| 248 |
+
"6(d)": {
|
| 249 |
+
"figure_path": "2207.04913v2_figure_6(d).png",
|
| 250 |
+
"caption": "Figure 6: Visualization of random sample size for each class in source domains when a different domain serves as the target domain in the Rotated MNIST dataset. For each source domain, the number of samples for different classes are shown in different colors. There are cases when different classes have similar sample number, e.g., Class 1 and 2 of source domain r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT when target domain is r30subscript\ud835\udc5f30r_{30}italic_r start_POSTSUBSCRIPT 30 end_POSTSUBSCRIPT, and also cases when different classes have quite different number of samples, e.g., in source domain r90subscript\ud835\udc5f90r_{90}italic_r start_POSTSUBSCRIPT 90 end_POSTSUBSCRIPT when target domain is r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT.",
|
| 251 |
+
"url": "http://arxiv.org/html/2207.04913v2/x27.png"
|
| 252 |
+
},
|
| 253 |
+
"7": {
|
| 254 |
+
"figure_path": "2207.04913v2_figure_7.png",
|
| 255 |
+
"caption": "Figure 7: The performance of WDRDG and four compared methods on the Rotated MNIST dataset with different class prior distributions across source domains. WDRDG outperforms other baselines by at least\n0.51%percent0.510.51\\%0.51 %, 3.90%percent3.903.90\\%3.90 %, 1.53%percent1.531.53\\%1.53 % when the target domain is r0subscript\ud835\udc5f0r_{0}italic_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, r30subscript\ud835\udc5f30r_{30}italic_r start_POSTSUBSCRIPT 30 end_POSTSUBSCRIPT, r60subscript\ud835\udc5f60r_{60}italic_r start_POSTSUBSCRIPT 60 end_POSTSUBSCRIPT, respectively, and achieves similar accuracies with MLDG but with smaller deviation when the target domain is r90subscript\ud835\udc5f90r_{90}italic_r start_POSTSUBSCRIPT 90 end_POSTSUBSCRIPT.",
|
| 256 |
+
"url": "http://arxiv.org/html/2207.04913v2/x28.png"
|
| 257 |
+
}
|
| 258 |
+
},
|
| 259 |
+
"validation": true,
|
| 260 |
+
"references": [],
|
| 261 |
+
"url": "http://arxiv.org/html/2207.04913v2"
|
| 262 |
+
}
|
20240323/2208.08270v4.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2209.01426v2.json
ADDED
|
@@ -0,0 +1,89 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"title": "Space Filling Curves for Coverage Path Planning with Online Obstacle Avoidance",
|
| 3 |
+
"abstract": "The paper presents a strategy for robotic exploration problem using Space-Filling curves (SFC). The strategy plans a path that avoids unknown obstacles while ensuring complete coverage of the free space in region of interest. The region of interest is first tessellated, and the tiles/cells are connected using a SFC pattern. A robot follows the SFC to explore the entire area. However, obstacles can block the systematic movement of the robot. We overcome this problem by determining an alternate path online that avoids the blocked cells while ensuring all the accessible cells are visited at least once. The proposed strategy chooses next waypoint based on the graph connectivity of the cells and the obstacle encountered so far. It is online, exhaustive and works in situations demanding non-uniform coverage. The completeness of the strategy is proved and its desirable properties are discussed with examples.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "In 1878, George Cantor demonstrated that an interval can be mapped bijectively onto . Later, G. Peano discovered one such mapping that is also continuous and surjective; the image of such mapping when parameterized in the interval to higher dimensions () is known as Space-Filling Curve (SFC). More SFCs were later discovered by E. Moore, H. Lebesgue, W. Sierpinski, and G. Polya [1 ###reference_b1###, 2 ###reference_b2###]. Space-filling curves (SFCs) possess intriguing properties. They are self-similar, meaning each curve consists of sub-curves similar to the entire curve. SFCs are also surjective maps, ensuring they cover every point in . Furthermore, they preserve locality, so points close in remain close in .\nDue to the above properties, SFCs have been used in many applications - data collection from sensor network [3 ###reference_b3###, 4 ###reference_b4###]; ordering meshes of complex geometries [2 ###reference_b2###] and many more. An approximate solution to Travelling Salesman Problem (TSP) can be found using Hilbert\u2019s Space Filling curve [5 ###reference_b5###]. Space Filling Tree analogous to SFCs having tree-like structure have been proposed for sampling-based path planning [6 ###reference_b6###], as opposed to traditional methods like Rapid-exploring Random Trees (RRTs) [7 ###reference_b7###].\nIn robotic exploration problem, a single or group of robotic agents are deployed to search, survey or gather information about a specific region while avoiding obstacles. Robotic exploration is one of the sub-problems of the larger Coverage Planning Problem (CPP), wherein the agent is bestowed with the task of visiting all points in a 2D area or volume [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###]. Numerous approaches for CPP already exists - Graph based, Grid based, Neural-Network based, Cellular decomposition based [8 ###reference_b8###]. Each of these approaches can be used for robotic exploration problem.\nSFCs have been utilized for robotic exploration, with their suitability stemming from the advantageous properties they possess. Exploration using SFCs is time complete and robust to failure [12 ###reference_b12###]. Hilbert\u2019s curve have been shown to be more efficient in time of travel / cost of task than lawnmower\u2019s path [13 ###reference_b13###]. The exploration strategies developed for 2D using SFCs can be extended to 3D since similar grammar exists for their construction in both dimensions [2 ###reference_b2###]. The generalized version of SFCs aka Generalized SFC (GSFC) can span irregular quadrilateral and triangles [2 ###reference_b2###]. SFCs can be easily used in non-uniform coverage scenarios requiring some parts to be searched more rigorously than others.\nThere is sizable literature dealing with SFC based robotic exploration. [12 ###reference_b12###] suggests the use of a swarm of mobile robots for exploration, each mobile robot following a SFC. This approach is efficient in terms of energy, robust to failures and assures coverage in finite time, but considers an obstacle-free environment. [14 ###reference_b14###] proposes a novel method for non-uniform coverage planning for UAVs. [13 ###reference_b13###] takes the work further and uses the Hilbert curve for non-uniform coverage which is optimal as opposed to existing methods not using SFC. [15 ###reference_b15###] introduced a UAV system to conduct aerial localization that uses Moore\u2019s SFC. 
However, [14 ###reference_b14###, 13 ###reference_b13###, 15 ###reference_b15###] does not consider obstacles.\n[16 ###reference_b16###] introduced path planning approach for SFCs with obstacles and proved the optimality for specific obstacle configurations, due to the existence of a Hamiltonian path for such obstacle configurations. [17 ###reference_b17###] introduced an algorithm to construct SFCs for sensor networks with holes. The algorithm can be used for motion planning with obstacles while using SFC. However, the solutions proposed in [16 ###reference_b16###, 17 ###reference_b17###] require the knowledge of obstacles before starting the exploration.\n[18 ###reference_b18###] formulated an online obstacle evasion strategy for Hilbert curve with only one waypoint blocked by an obstacle. [19 ###reference_b19###] builds upon [16 ###reference_b16###] and suggests a strategy capable of evading two neighboring waypoints on a Hilbert curve. [20 ###reference_b20###] talks about online obstacle avoidance for aerial vehicle covering a region using the Sierpinski curve, but the obstacles need to block disjoint cells.\nEarly work on the topic did not consider obstacles in the environment. Later works considered obstacles but required knowledge about the obstacles or were restricted to a particular class of obstacles and SFC used. Hence, the necessary next step is to develop an online strategy that can navigate around arbitrary obstacles while employing any SFC and this is addressed by the proposed strategy, The contributions of the paper are:\nThe proposed strategy is online and can avoid obstacles on he go using the grid connectivity of the SFC cells.\nThe strategy is applicable to any SFC and any number of obstacles placed randomly in the environment.\nThe strategy can be used for non-uniform coverage.\nThe rest of this paper is organized into five sections: Section II formally introduces SFCs. Section III formulates the problem and the proposed strategy is presented. Section IV discusses the properties of the strategy and the implemented examples. Section V concludes the paper and talks about the limitations and possible future work."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Preliminaries",
|
| 15 |
+
"text": "Space Filling curves (SFCs) are defined as follows, \nDefinition [2 ###reference_b2###]: Given a mapping , with as image, is called a SFC, if has Jordan content (area, volume, ..) larger than 0.\nIn this paper, approximations of SFCs are used. Approximate SFCs are constructed by dividing and connecting the centers of the cells by a continuous piecewise straight line. The way the divisions are connected depends on the grammar of the SFC being used.\n###figure_1### Figure 1 ###reference_### shows a square divided N times and the centers are connected to generate Hilbert curve. In this case, hilbert curve is mapped onto a square in . The degree to which the square is divided is quantified by the term \u201citeration\u201d (also referred to as order). As iteration goes to infinity, the SFC visits every point in the square. This explains the surjective mapping property of SFCs. A rigorous mathematical explanation can be found in [1 ###reference_b1###, 2 ###reference_b2###]. The approximate SFCs are simply referred to as SFC in literature. Moreover, approximate SFCs are sufficient for the exploration given that the robotic agent has a search radius enough to cover the entire cell (area or volume) while at its center. Furthermore, we can see in Fig 2 ###reference_### that iteration is constructed by rotation and translation of iteration , which demonstrates the self-similar property. The grammatical way of constructing SFCs is based on this fact.\nSFCs are H\u00f6lder continuous mappings. Given,\nand is SFC with , if,\n###figure_2### The function is said to be H\u00f6lder continuous with exponent , if a constant exists for all . The RHS can be interpreted as the distance between and , while LHS is the distance between points in SFC. Here, the distance between points on SFC is bounded by the distance between the points they are mapped from, explaining the Locality Preservation.\nLastly, any problem in Q mapped by SFC can be transformed into a problem in . And often solving the problem in is straightforward than solving it in the original space . This technique is referred to as General Space-Filling Heuristics (GSFH)."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Problem Formulation & Proposed Solution",
|
| 21 |
+
"text": "We consider a robotic agent with sensing radius that needs to explore a 2D region. This region is partitioned into cells so that each cell remains within the agent\u2019s sensor footprint when positioned at the cell\u2019s center. The robot traverses the cells sequentially following the SFC pattern. It identifies obstacles while moving into the cell containing them and stores the obstacle information for future reference.\nWe elaborate our strategy using the Hilbert curve, but the strategy can be used for any SFC. Consider a square with area , having static obstacles at unknown locations. The number of obstacles are not known and they can have arbitrary shapes and sizes. The iteration () of Hilbert curve is determined such that the agent can scan an entire cell while at its center. Mathematically the iteration can be found out by comparing sensing radius and diagonal of cell for Hilbert curve,\nThe center of each cell is identified as waypoint and are numbered from to starting from one of the corner cells. Without loss of genrality, we assume it to be the left-bottom cell. The goal of the robotic agent is to start at and visit all the unblocked reachable waypoints.\nHere, GSFH simplifies a 2D scanning problem into a 1D routing task by employing SFC. When addressing the 1D scenario which involves covering all waypoints along a line, we adopt a greedy approach by sequentially moving from left to right, ensuring all points to the left are visited before proceeding right. This method avoids the need for backtracking to reach any previously skipped waypoint on the left, thereby ensuring an efficient path is followed. In a 1D routing task with obstacle (or obstacles), the agent will be limited to traversing only the segment of the line from its starting point.\nIn our scenario, traversing the 1D route left to right or visiting waypoints to sequentially the agent effectively navigates the 2D region following the SFC pattern. The existence of an obstacle does not pose a significant problem, as the agent can deviate from the SFC pattern and identify an alternate route through other connected waypoints.\n###figure_3### The proposed strategy considers the waypoints (centers of the cells) as vertices of graph, with adjacent waypoints connected. It determines the next waypoint/vertex to visit based on vertices already visited and obstacles identified (see figure 3 ###reference_###). The agent encounters an obstacle while at dotted grey vertex and chooses to visit dotted red vertex among other adjacent vertices. The strategy selects the lowest-numbered vertex that has not yet been visited and is adjacent to the set of vertices already visited. A shortest path is determined to the selected vertex. The strategy is used repeatedly until all the vertices connected to initial one are visited atleast once. The agent rejects to choose the vertices where obstacle has been detected in the past.\nWe use the following graph theoretic nomenclature for our problem:\nGraph with waypoints of SFC as vertices; Adjacent vertices are connected by the edge; No waypoint is assumed to be blocked by an obstacle in . is a dual graph of SFC decomposition.\nSet of vertices where obstacles have been detected\nSet of visited vertices\nSet of vertices in adjacent to vertices in but not present in\nThe strategy is presented as a pseudocode in Algorithm 1 ###reference_###. While at waypoint , the agent knows given the SFC, from the visited nodes and from the detected obstacles till that time. 
If all the waypoints adjacent to in are blocked, no waypoint remains to be visited and the search can be terminated (line ). On the other hand, the strategy suggests minimum numbered waypoint , and the shortest path to is found using Dijkstra\u2019s algorithm (any shortest path finding algorithm can be used). refers to an array of waypoints starting with and terminating at . The agent checks if is blocked while at the pen-ultimate waypoint (line ). If is found to be blocked, it is added to and the strategy starts again at step (line to ). Otherwise, is reached and added to (line and ). The algorithm terminates when no waypoint adjacent to the visited waypoints remain.\nAn agent starting at waypoint and following the proposed strategy will visit all the waypoints connected to .\nWe prove the lemma by contradiction. Assume a waypoint is not blocked by an obstacle and is connected to , but the agent terminated the exploration without visiting . This can happen only if,\nBut,\nSince, is connected to and not visited. Therefore, a contradiction.\n\u220e\n###figure_4### The agent will explore the area spanned by waypoints given the sensing radius is enough to cover the entire cell (Eq. III ###reference_###). In scenarios involving confined areas, the iteration of SFC might result in blocking access to certain areas that are otherwise reachable. These instances can be detected by comparing the sum of visited and blocked waypoints to the highest numbered (say ) waypoint achieved at the end of the search.\nIf,\nThis could mean some waypoints have been blocked due to confined spaces which can be tackled by increasing the iteration of SFC (see figure 4 ###reference_###), or some waypoints are obstructed on all sides and any change in search will not matter."
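The following is a minimal sketch of the evasion logic described above, under stated assumptions: waypoints are ordered along the SFC (for instance by the Hilbert indexing sketched in Section II), the graph is the 4-connected grid of cells, breadth-first search stands in for Dijkstra since edges are unweighted, and `is_blocked` is a hypothetical sensing callback. This illustrates the strategy and is not the authors' implementation.

```python
from collections import deque


def explore(order, is_blocked):
    """Visit every reachable waypoint starting from waypoint 0, evading obstacles on the go.

    order      : list of integer cell coordinates (x, y), indexed by SFC waypoint number
    is_blocked : callback reporting whether a waypoint turns out to contain an obstacle
    Returns the travelled route as a list of waypoint numbers."""
    coord_to_wp = {c: i for i, c in enumerate(order)}

    def adjacent(wp):
        x, y = order[wp]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (x + dx, y + dy) in coord_to_wp:
                yield coord_to_wp[(x + dx, y + dy)]

    def shortest_path(src, dst, allowed):
        # Breadth-first search restricted to already-visited waypoints plus the destination.
        prev, queue = {src: None}, deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:
                path = []
                while u is not None:
                    path.append(u)
                    u = prev[u]
                return path[::-1]
            for v in adjacent(u):
                if v not in prev and (v in allowed or v == dst):
                    prev[v] = u
                    queue.append(v)
        return None

    visited, blocked, route = {0}, set(), [0]
    while True:
        frontier = {v for u in visited for v in adjacent(u)} - visited - blocked
        if not frontier:
            return route                  # no unvisited waypoint adjacent to the visited set remains
        target = min(frontier)            # lowest-numbered unvisited adjacent waypoint
        path = shortest_path(route[-1], target, visited)
        if is_blocked(target):            # obstacle sensed from the penultimate waypoint
            blocked.add(target)
            route.extend(path[1:-1])
        else:
            visited.add(target)
            route.extend(path[1:])
```

For example, `explore([d2xy(4, d) for d in range(16)], lambda wp: wp in {5, 6})` returns the modified route on an iteration-2 Hilbert grid with two hypothetically blocked waypoints.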
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "IV Discussion & Results",
|
| 27 |
+
"text": "In this paper, the obstacles bigger or comparable to the sensing radius of the agent are considered dense. While the obstacles smaller than the sensing radius and scattered across the area are considered sparse. Here it is assumed that the agent can modify the sensing radius, as is the case in [14 ###reference_b14###, 13 ###reference_b13###] with regards to aerial vehicles. The properties of the altered route recommended by the proposed strategy are outlined below-\nThis strategy is effective in environments with both dense and sparse obstacles, unlike conventional approaches such as the Lawnmower\u2019s path, which struggles in areas with sparse obstacles.\nPath adjustments are made within the set of waypoints that have already been visited, making this approach suitable for online use. Moreover, in parts free of obstacles, the strategy follows original SFC pattern, enabling an optimal search in these regions.\nHigher iterations of SFCs are preferred for sparse obstacles while lower iterations are preferred for obstacles with larger size. Higher iterations offer a more agile path and lesser occlusion. But, they offer a longer path and may not be desired for dense.\nThe path generated using the strategy can be adapted for different parts of the region to be explored. Some parts may require more rigorous search than others. In such cases, higher iteration may give the agent more time and focus the search on smaller cells.\nThe properties mentioned above are illustrated next through different scenarios, as implemented in the code."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "4.1",
|
| 31 |
+
"parent_section_id": "4",
|
| 32 |
+
"section_name": "IV-A Results",
|
| 33 |
+
"text": "The presented strategy was implemented as a Python library. iGraph [22 ###reference_b22###] was used for graph operations (version ). Hilbert curve library [21 ###reference_b21###] was used for plotting the Hilbert curve (version ). The implementation and a guide on how to execute the examples can be found at [23 ###reference_b23###].\nDense Obstacles : The strategy was implemented for a situation with normal obstacles. Figure 5 ###reference_### shows two obstacles blocking the SFC. The modified path is also shown.\nHere, while at ,\nthe agent searches for obstacle and goes to . Next,\nagent searches for obstacle and goes to . When no obstacle is found, the agent follows the SFC pattern. This happens till , when obstacle is detected at . Now,\nSo, the agent tries to reach but encounters obstacle while at . Next, it visits from using shortest path. Now,\nThe agent decides to go but detects an obstacle while at . After which the agent visits ,\nHence, the agent goes to and is added to , which is reached finally. Now,\nNext, The agent decides to go and detects obstacle while at . Finally, the agent goes to , and so on following the SFC until next obstacle is detected. The search is finally terminated at waypoint .\n###figure_5### \n###figure_6### Sparse Obstacles : To test the strategy for Sparse obstacles, Iteration Hilbert curve was blocked by obstacles at unknown random locations. The strategy was able to generate alternate path while following SFC pattern in areas without obstacles as seen in figure 6 ###reference_###. In scenarios with higher percentage of blocked obstacles resulted in smaller region connected to the starting waypoint.\n###figure_7### Non-Uniform Coverage : In certain cases the user may want to search some parts more rigorously. This is demonstrated by using the proposed strategy in a square region with quadrants spanned by Hilbert curve with different iterations (see figure 7 ###reference_###). Higher iteration (4) is used in a top-left quadrant while lower iteration (1 and 2) in lower quadrants. The proposed strategy suggests exhaustive detour for each of the quadrant when each of them is traversed sequentially.\nSometimes, the agent is unable to get to the last waypoint of SFC. This is seen in the right-hand lower quadrant of fig 7 ###reference_###. The agent ends his journey at point . Hence, it is uncertain whether a direct path to the first waypoint of the subsequent SFC is available, as it could be obstructed. Here, agent can be moved to a waypoint () on the shared edge closest to . Eventually, moving to the closest waypoint on the next SFC and start using the proposed strategy. The approach will guarantee the coverage of waypoints numbered lower than connected to are visited.\n###figure_8###"
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "5",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "Conclusion & Future Work",
|
| 39 |
+
"text": "This paper presented a strategy for online obstacle evasion on a Space-Filling curve. The strategy suggests visiting minimum numbered adjacent waypoint that has not been visited. The search was proven to be exhaustive and the strategy was further elaborated through examples with obstacles of different nature. The strategy does not necessarily guarantee a optimal path, but can act as a baseline for further optimization. In addition to addressing the gaps in the strategy, there are other intriguing directions for future work. An enhanced form of the strategy wherein the agent has the knowledge of state (blocked/not-blocked) of all the immediate up-down-left-right nodes could lead to shorter routes as compared with in situ detection as used here. Extension to 3D space and environment with dynamic obstacles could be also be exciting future directions."
|
| 40 |
+
}
|
| 41 |
+
],
|
| 42 |
+
"appendix": [],
|
| 43 |
+
"tables": {},
|
| 44 |
+
"image_paths": {
|
| 45 |
+
"1": {
|
| 46 |
+
"figure_path": "2209.01426v2_figure_1.png",
|
| 47 |
+
"caption": "Figure 1: Hilbert curve as mapped from I\ud835\udc3cIitalic_I to a tessellated square",
|
| 48 |
+
"url": "http://arxiv.org/html/2209.01426v2/extracted/5490425/mapping.png"
|
| 49 |
+
},
|
| 50 |
+
"2": {
|
| 51 |
+
"figure_path": "2209.01426v2_figure_2.png",
|
| 52 |
+
"caption": "Figure 2: Hilbert curve iterations 1 to 4; Colored circles represent specific translation + rotation rules for creating nt\u2062hsuperscript\ud835\udc5b\ud835\udc61\u210en^{th}italic_n start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT iteration from n\u22121t\u2062h\ud835\udc5bsuperscript1\ud835\udc61\u210en-1^{th}italic_n - 1 start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT iteration",
|
| 53 |
+
"url": "http://arxiv.org/html/2209.01426v2/extracted/5490425/iter.png"
|
| 54 |
+
},
|
| 55 |
+
"3": {
|
| 56 |
+
"figure_path": "2209.01426v2_figure_3.png",
|
| 57 |
+
"caption": "Figure 3: Grey dots: Visited vertices of the graph, Yellow lines: Edges connecting the vertices, Red dots: Vertices adjacent to already visited vertices",
|
| 58 |
+
"url": "http://arxiv.org/html/2209.01426v2/extracted/5490425/explain.png"
|
| 59 |
+
},
|
| 60 |
+
"4": {
|
| 61 |
+
"figure_path": "2209.01426v2_figure_4.png",
|
| 62 |
+
"caption": "Figure 4: Region blocked by tight space reachable through the use of higher iteration (4)",
|
| 63 |
+
"url": "http://arxiv.org/html/2209.01426v2/extracted/5490425/inv.jpg"
|
| 64 |
+
},
|
| 65 |
+
"5(a)": {
|
| 66 |
+
"figure_path": "2209.01426v2_figure_5(a).png",
|
| 67 |
+
"caption": "Figure 5: Hilbert curve with normal sized obstacles; Modified path, Brown represent waypoints with detected obstacle",
|
| 68 |
+
"url": "http://arxiv.org/html/2209.01426v2/extracted/5490425/ex1a.jpg"
|
| 69 |
+
},
|
| 70 |
+
"5(b)": {
|
| 71 |
+
"figure_path": "2209.01426v2_figure_5(b).png",
|
| 72 |
+
"caption": "Figure 5: Hilbert curve with normal sized obstacles; Modified path, Brown represent waypoints with detected obstacle",
|
| 73 |
+
"url": "http://arxiv.org/html/2209.01426v2/extracted/5490425/exp1b.jpg"
|
| 74 |
+
},
|
| 75 |
+
"6": {
|
| 76 |
+
"figure_path": "2209.01426v2_figure_6.png",
|
| 77 |
+
"caption": "Figure 6: Obstacle evasion scenarios with sparse obstacles (shown in black); 10%, 20% and 30% of total waypoints blocked",
|
| 78 |
+
"url": "http://arxiv.org/html/2209.01426v2/extracted/5490425/spa.jpg"
|
| 79 |
+
},
|
| 80 |
+
"7": {
|
| 81 |
+
"figure_path": "2209.01426v2_figure_7.png",
|
| 82 |
+
"caption": "Figure 7: Purple star represents starting position, inverted triangle represents terminal position, route between the quadrants is represented by sky-blue arrows",
|
| 83 |
+
"url": "http://arxiv.org/html/2209.01426v2/extracted/5490425/non_no_obs.png"
|
| 84 |
+
}
|
| 85 |
+
},
|
| 86 |
+
"validation": true,
|
| 87 |
+
"references": [],
|
| 88 |
+
"url": "http://arxiv.org/html/2209.01426v2"
|
| 89 |
+
}
|
20240323/2212.09963v2.json
ADDED
|
@@ -0,0 +1,256 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Use of mobile phone sensing data to estimate residence and occupation times in patches: Human mobility restrictions and COVID-19",
|
| 3 |
+
"abstract": "Modeling critical social phenomena, such as economic trends and infectious disease transmission, often requires capturing the dynamics of population mobility. This study focuses on modeling and inferring urban population mobility using geospatial data obtained from inhabitants\u2019 GPS reports. We estimate mobility patterns and the time fractions that inhabitants spend in each of the areas of interest, such as zip codes and census geographical areas, utilizing a Brownian bridge model. The derived information can be applied to diverse models involving human activities and dynamics. However, our primary objective is to address the practical gap in epidemic modeling based on patches, which may require more detailed mobility information than conventional origin-destination matrices provide. In practice, such information is often reduced to simpler structures or mobility models with few parameters that synthesize human mobility. We illustrate the model and method using data from the city of Hermosillo, Sonora, Mexico, in 2020, specifically between the two local waves of the 2019 coronavirus disease pandemic. We obtain estimations for different time periods to assess their stability and sensitivity, and compare these findings against known mobility restrictions and social events. Furthermore, we integrate the estimated residence and occupation parameters into a multi-patch compartmental epidemiological model to assess the impact on the epidemic evolution of changes in urban mobility.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "It is increasingly evident that we are witnessing a continual rise in global and regional connectivity. Understanding mobility patterns is crucial to developing more realistic mathematical models to comprehend various phenomena such as economic trends, patterns of violence, information dissemination, and the spread of infectious diseases [25 ###reference_b25###, 41 ###reference_b41###, 49 ###reference_b49###]. To practically utilize these more realistic mathematical models, we need to estimate their complex high-dimensional parameters. However, this estimation problem is often not addressed, as many works proposing new models tend to study them theoretically or through simulations. Alternatively, they may reduce complexity by employing specific mobility models, thereby reducing the dimensionality of the problem. For example, some dynamic human systems use mobility parameters described by gravity or radiation models, producing more specific mobility parameters like origin-destination mobility matrices.\nIn this study, we leverage big data from smartphone geolocation information and the Brownian Bridge model to propose estimating mobility parameters beyond the conventional origin-destination matrices used in network models. Specifically, we focus on what we term the Residence and Occupation Matrix (ROM). Unlike the origin-destination matrix, which is typically used for commuter and migrant models, the ROM does not describe the number of travels between any two zones, but instead depicts the expected fraction of time that an individual residing in a specific zone spends in all other zones. This matrix allows us to characterize mobility patterns based on the most significant zones for individuals, according to the time they spend in them during various activities such as work, school, and commuting.\nAlthough human mobility and COVID transmission have been extensively researched in the field of GIScience, valuable insights for future research are provided by Zhang et al. [58 ###reference_b58###]. These insights include 1. Fostering multidisciplinary collaboration, 2. enhancing mathematical models for analysis and prediction of disease transmission, simulation, and prediction, and 3. diversifying sources of mobility data to ensure accuracy and suitability.\nThis study addresses the first point as part of a multidisciplinary effort involving industry, the healthcare sector, epidemiologists, mathematicians, statisticians, and computer scientists. The complexity of data gathering, model development, statistical inference, and big data challenges necessitated this multidisciplinary approach. The outcomes of this research are directly related to [2 ###reference_b2###] and [1 ###reference_b1###]. Regarding the second point, the works cited address this aspect, since we propose the epidemic model and theoretically studied it in [2 ###reference_b2###], and divide the estimation problem into two parts: first, the estimation of mobility parameters (presented here), and second, the statistical inference of remaining epidemic parameters [1 ###reference_b1###]. The inference of the epidemic model using this strategy is based on smartphone information and COVID-19 incidence data in Hermosillo.\nRegarding the last point, we acknowledge that while other studies utilize similar geolocation (GPS) data, they often do not fully exploit its potential. 
Typically, these studies focus on estimating mobility information related to residence location and origin-destination matrices, relying solely on the frequency of GPS reports in each zone. In contrast, our utilization of the Brownian model enables us to estimate comprehensive spatio-temporal information from discrete-time geolocation reports, offering significant opportunities for modeling various human activities in a metropolitan area in detail. Moreover, our approach facilitates the incorporation of estimates derived from real-time or near-real-time data acquisition, holding considerable promise for enhancing the application of mathematical models. This has the potential to advance our understanding and response to social, economic, or health-related events, such as the epidemic model outlined in this study.\nThe structure of the paper is as follows: Section 2 offers background information on mobility studies and epidemic models that incorporate mobility components. Section 3 outlines the demographic and GPS data collection from mobile phones. In Section 4, we detail the methods for residence selection and introduce the Brownian Bridge model, which we employ to estimate occupation times. Section 5 presents the results of the residence selection and the estimated ROMs for the designated periods. Here, we compare the ROMs to assess their stability and sensitivity in light of epidemic data, mitigation strategies, and celebrations. Additionally, we examine the sensitivity of the epidemic model under realistic mobility scenarios based on estimated mobility parameters. Finally, Section 6 provides a discussion of the results and outlines avenues for future research."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Background",
|
| 15 |
+
"text": "Detailed human mobility records of a particular regional population can be obtained from mobile phone records [47 ###reference_b47###] and such information can be used to predict future user\u2019s locations, validate human mobility models [36 ###reference_b36###], study the role of human mobility and estimate the main characteristics of social networks [55 ###reference_b55###], among others.\nIndeed, the desire to understand and predict how the human population moves in the real world has been one of the main objectives of data collected by mobile phone service providers [39 ###reference_b39###].\nThe diversity of applications of the study of human mobility cannot be over-emphasized. For instance, in the area of transportation, the study of human mobility can be used as a guide for the optimization of road networks and public transport systems, leading to efficient urban planning and engineering. Some other applications include humanitarian relief, health services, and identification of social graphs [39 ###reference_b39###, 30 ###reference_b30###, 44 ###reference_b44###, 56 ###reference_b56###, 43 ###reference_b43###, 35 ###reference_b35###, 50 ###reference_b50###].\nThe heterogeneous human interactions can significantly impact the dynamics we study. In epidemiology, traditional mass-action compartmental models have proven their usefulness for different infectious agents in modeling infection dynamics at the regional level. However, for several diseases, it is crucial to consider the structure of sub-populations (defined by zones or patches) and their more specific contacts patterns that are usually induced by the spatial configuration and mobility.\nTo introduce heterogeneous human contacts, epidemics in networks model for individual connections (or reaction-diffusion models on meta-population networks) have played an essential role in journals with a physics focus [13 ###reference_b13###, 11 ###reference_b11###, 54 ###reference_b54###, 46 ###reference_b46###]. Nevertheless, the model inference, fit or selection, uses highly computing-intensive numeric methods such as Markov chain Monte Carlo (MCMC) or particle filtering based on Monte Carlo simulations. Unfortunately, the scaling of these methods makes them prohibitive when considering medium- to larger-size populations. This paper focuses on estimating the information used in the class of patchy epidemic models [8 ###reference_b8###, 2 ###reference_b2###] that fall into an intermediate complexity between mass-action and networks models (at individual level). These models introduce population heterogeneities at the level of sub-populations and can include the within- and between-contact patterns induced by mobility.\nSince this work emphasizes the applications for patchy epidemic models, from now on we refer as \u201cpatches\u201d to the geographical zones that partition the area of interest.\nSeveral epidemic models have proposed the introduction of regional mobility patterns [5 ###reference_b5###, 9 ###reference_b9###, 8 ###reference_b8###, 15 ###reference_b15###, 57 ###reference_b57###, 31 ###reference_b31###] for human-to-human infectious diseases [34 ###reference_b34###, 7 ###reference_b7###, 26 ###reference_b26###] or vector-borne diseases [29 ###reference_b29###, 14 ###reference_b14###, 3 ###reference_b3###, 32 ###reference_b32###]. These can incorporate mobility with an emphasis on a regional scale, which is critical in the increasing number of cases at the beginning of the outbreak [33 ###reference_b33###]. 
However, we want to evaluate mobility on the scale of the metropolitan area since most of these movements occur as part of the daily life of the 55% of the world population living in cities [60 ###reference_b60###, 51 ###reference_b51###].\nNumerous authors have sought to identify and model mobility patterns using a wide array of data sources, ranging from traditional sources such as surveys and census data to more contemporary sources like GPS data, social media feeds, and online platforms [6 ###reference_b6###]. While official data and social networks may offer insights into mobility at a macroscopic level, covering large regions such as states or countries, they often lack the granularity needed to understand mobility within a metropolitan area in detail, both spatially and temporally.\nFor decades, scientists have studied animal movement using tracking devices affixed to wildlife, which generate detailed datasets capturing movement patterns within specific populations. More recently, geolocated smartphones have emerged as powerful tracking devices for humans, owing to their widespread adoption among a significant portion of the population. This ubiquity facilitates the collection of vast geolocation datasets, which provide detailed insights into human mobility and behavior with unprecedented spatial and temporal resolution. Consequently, this enables the study of human travel patterns at finer scales in both time and space [37 ###reference_b37###, 41 ###reference_b41###, 45 ###reference_b45###, 17 ###reference_b17###]. Mobile phones represent a valuable source of information for studying various aspects of human behavior, environmental monitoring, transportation, social services, businesses [16 ###reference_b16###], and hold untapped potential as tools for public health.\nWhile the potential utility of data sourced from mobile phones has been recognized for several years, analyses typically center around estimating population sizes in specific areas, identifying city activities, hotspots, and characterizing mobility patterns through contact networks [10 ###reference_b10###] or metrics [23 ###reference_b23###]. These insights are often derived directly from reported GPS records and aggregated over time intervals.\nIn the context of epidemiological models, researchers have focused on understanding certain types of movement that exhibit regular patterns, such as commuting between two regions [48 ###reference_b48###]. Typically, these movements are estimated based on the frequency of GPS reports in each area. In more theoretical approaches, authors often assume specific models for human mobility from one region to another, such as adjacency and gravity models [25 ###reference_b25###, 6 ###reference_b6###]. Additionally, the impact of mobility, according to these models, is illustrated under various mobility scenarios [49 ###reference_b49###].\nFor these epidemic models, the mobility estimation reduces to inferring the percentage of individuals for each pair of residence and destination regions. However, epidemiological models, such as [8 ###reference_b8###, 2 ###reference_b2###], consider mobility not only as commuting between two patches, but also as the different temporal contacts that individuals have with individuals on routes to various destinations during the day. These other epidemiological models then use the information on the origin (residence) and the proportion of time individuals spend in each region (including their own). 
We propose estimating this Residence and Occupation Matrix (ROM) with a model that takes into consideration the individuals\u2019 sequence of GPS reports (\u2018pings\u2019) and the stochastic variation of the path between consecutive locations.\nThe objective is to determine the residential patch of each individual and statistically estimate the average time spent by inhabitants in each of the patches of interest within the city, such as zip codes and other geographical areas. This involves generating a non-negative matrix that delineates the proportion of time that individuals residing in a specific area are expected to spend in each patch. Our proposed method is based on the Brownian bridge process commonly employed in ecology to analyze animal movement. By leveraging this method, we can estimate an individual\u2019s location at any given time between successive pings. We applied this approach to data from Hermosillo, Mexico, during the COVID-19 pandemic. Additionally, we integrated the estimated residence-mobility matrix into a multi-patch compartmental SEIR model to evaluate the epidemic impact of mobility changes resulting from governmental interventions."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "Data",
|
| 21 |
+
"text": ""
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "Geographic and demographic information",
|
| 27 |
+
"text": "The city of Hermosillo is in the state of Sonora in the northwest of Mexico and has an area of 169.55 km. It is the capital of the state, and it is located 280 km south of the border with the United States and about 110 km from the coast of the Gulf of California. More precisely, it is located at the parallel of the north latitude and the meridian of west longitude from Greenwich (Figure 0A ###reference_sf1###). The Population and Housing Census111Census 2020, https://www.inegi.org.mx/programas/ccpv/2020/ ###reference_20/###, conducted by the National Institute of Statistics and Geography (INEGI) in 2020, reports a population of 2,944,840 in the state of Sonora, and the municipality with the largest population in this state corresponds to the city of Hermosillo, with a total population of 936,263. The local time zone in Hermosillo corresponds to UTC-07:00 throughout the year, and in contrast to all other states in Mexico (except Quintana Roo), Sonora do not switch to the national summertime that in 2020 was from April 5 to October 25.\n###figure_1### ###figure_2### The city and its metropolitan area are divided into 502 basic census geographical units (AGEB), as shown in Figure 1 ###reference_###. These areas represent the smallest demographic units, and information such as population and basic economics is publicly available for them. AGEBs, or aggregations thereof, present a natural option for defining unit areas (patches) for urban mobility analysis."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.2",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "Mobile phone sensing data",
|
| 33 |
+
"text": "Mobile phone tracking is a process to identify the location of a mobile phone based on various technologies and methods. These include multilateration (triangulation) of radio signals and using a global navigation satellite system (GNNS or GPS). The first technology uses the constant roaming radio signals that a mobile phone emits and may be picked up by three or more stationary cell towers enabling one to approximate the device location through triangulation. In contrast, the Global Positioning System (GPS, GNNS) allows for a high-precision localization (within a few centimeters to meters) of signals that operate independently of any telephonic or internet reception.\nA mobile phone \u2018ping\u2019 is the process of sending a signal to a particular mobile phone and this responding with the request data to determine its location by utilizing its GPS location capabilities.\nThe data that we use in this research corresponds to GPS pings collected by mobile service providers when users access specific (but undisclosed) apps for which they have granted permission to access their location information.\nOn September 14, 2020, in the city of Hermosillo, Sonora, Mexico, the University of Sonora entered into an agreement granting access to GPS data from mobile phone reports covering the city area. This agreement, subject to confidentiality conditions, enables us to disseminate scientific findings derived from this data.\nThe raw data was weekly stored in a daily-partitioned table in BigQuery (a Google Cloud service). Through the API service and SQL queries, we could download specific sub-data bases as well as the whole database, in order to do the data wrangling prior to the analysis."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.2.1",
|
| 37 |
+
"parent_section_id": "3.2",
|
| 38 |
+
"section_name": "3.2.1 The variables of mobility data",
|
| 39 |
+
"text": "The mobility data used in this work consists of 80,582,452 records, which contain the variables in Table 1 ###reference_### that we describe afterwards.\nVariable\nDescription\nmobile phone\u2019s ID (unique to each device)\nPing date and time\nPing\u2019s Latitude\nPing\u2019s Longitude\nIt is a unique alphanumeric ID associated with each device (mobile phone). The database contains 306,963 such IDs. Each time a device utilizes a specific app with GPS localization capabilities enabled, it generates a ping, resulting in the addition of a new record to the database. The number of pings per device ranges from 1 to 18,297. However, 96.41% of these devices have at most 1,000 records in the complete database.\nThis field consists of a sequence of characters in the format YYYY-MM-DD hh:mm:ss UTC, indicating the date and time when a device generated a ping. All timestamps fall within the timeframe from 00:00:00 on September 18, 2020, to 23:59:59 on December 13, 2020, in Universal Time Coordinated (UTC). However, for estimating urban mobility, conducting various sanity checks, and determining the residence patch for individuals in the city of Hermosillo (as outlined in Section 4.1 ###reference_###), we convert the timestamp variable to the local time zone. Therefore, the local dates and times of each mobility data observation range from 15:00 on September 17, 2020, to 15:00 on December 13, 2020.\nThese two columns in the database contain numerical values indicating latitude and longitude coordinates of the device when a ping was made. To facilitate distance calculations between GPS data points, we transform these coordinate data into Cartesian coordinates using the UTM (Mercator) projection. This transformation was performed using the geopandas library in Python. The latitude values in the database range from 28.9753 to 29.1960, while the longitude values range from -111.1002 to -110.8457."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2.2",
|
| 43 |
+
"parent_section_id": "3.2",
|
| 44 |
+
"section_name": "3.2.2 Spatial and temporal characteristics of the data",
|
| 45 |
+
"text": "Periodic patterns resulting from daily routines such as commuting, school, rush hours, meal times, and nighttime activities significantly impact the mobility of urban populations and the frequency of ping registrations. These patterns can vary based on factors like the day of the week (weekday or weekend) and the availability of local transport systems [17 ###reference_b17###]. In Figure 2 ###reference_###, we present the complete time series of the number of pings categorized by both the day of the week and the time of day. This dataset spans 87 consecutive days from 2020-09-17 to 2020-12-13, and is divided in the plot into 13 weeks, starting from 00:00, 2020-09-14 (Monday) and ending on 24:00, 2020-12-13 (Sunday). Each week is visually differentiated by using a different color. The first day (2020-09-17) is represented by the \u2019short\u2019 red line within the Thursday window. The conclusion of the time series is indicated by the \u2019short\u2019 pink line, which ends at 17:00 hrs within the Sunday window.\nThis visualization reveals the temporal regularity, aligning with the typical activity schedules for city of Hermosillo residents.\nFor example, activity levels tend to decrease at dawn and rise during hours associated with work and leisure. Notably, distinct patterns emerge for weekends.\n###figure_3### Figure 3 ###reference_### shows the bivariate normal kernel density estimate for traveled distance between consecutive pings and travel time (in a logarithmic scale for proper visualization) using all pings recorded during the study period from a sample of 10,000 id\u2019s. This graph was generated using MASS package in R and selecting kde2d\u2019s default parameters. The vertical dotted lines mark the corresponding time scale in seconds. Similarly, the horizontal dotted lines mark the corresponding distances traveled in meters. From the figure, we observe that most consecutive pings are at intervals between 2 and 30 minutes. These are divided into those where people move only few meters (likely walking or in a workspace) and those where they move between 30 and 500 meters (likely in a vehicle). At the top right, a third, less important group spends between 2 and 40 hours between pings but moves 500 to 10,000 meters.\n###figure_4### To carry out censuses and surveys, it is necessary to define, in the geographical area, the study areas.\nSuch units are called Geostatistical Areas and in Mexico these are: State (AGEE), Municipal (AGEM) and Basic (AGEB). The latter is the smallest and fundamental geographical area and may be urban or rural. Urban AGEBs delimit a part or the total of a locality of 2500 inhabitants or more. In larger towns and cities, these AGEBs generally groups 25 to 50 blocks. In the case of the city of Hermosillo, there are 587 urban AGEBs whose population sizes are illustrated in Figure 0B ###reference_sf2###."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "Methods",
|
| 51 |
+
"text": "One of the first papers that discussed urban analysis using data originated from mobile phones was [42 ###reference_b42###] and was followed by works that highlighted possibilities of real-time visualization and monitoring of displacements in cities. In general terms, now we can distinguish models of human mobility aimed at reproducing individual mobility patterns or general population flows [6 ###reference_b6###].\nThe residency-mobility matrix that we estimate can be viewed as a general population flow that describe the mobility patterns for individuals who inhabit each AGEB, but it is based on individual mobility modelling as we first estimate individual paths between consecutive pings. In the next sections we describe the model and the inference we propose."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.1",
|
| 55 |
+
"parent_section_id": "4",
|
| 56 |
+
"section_name": "Residence selection",
|
| 57 |
+
"text": "Since we are interested in describing mobility at the AGEB level, it is necessary to identify the AGEB to which each ping\u2019s location belongs. For this, it is essential to transform the latitude and longitude data to the UTM coordinate system and identify the AGEB to which it corresponds. We do this using the AGEBs official polygon information.\nNow, as we also want to describe the mobility patterns for all individuals that live in each AGEB, and this information is not available for each ID, it is necessary to assign each ID to an AGEB as a residence using the very same database. That is, based on the ping information of each ID we determine the AGEB that its resides. For this, we use the an heuristic that combines criteria of frequency, timeframes, and AGEBs\u2019 population size. This selection process is described in Algorithm 1 ###reference_###.\nThe primary idea behind this algorithm is to determine the AGEB of residence for an ID based on where it demonstrates the highest frequency of pings between 22:00 and 06:00 hours. If multiple AGEBs fulfill this criterion, the residence is randomly assigned by sampling from these candidates, with the sampling weighted by their respective population sizes. This heuristic falls under the category of unsupervised methods and represents a variation of the \"grid frequency method\" [59 ###reference_b59###, 53 ###reference_b53###]. The main distinctions lie in the fact that the grids correspond to irregularly shaped patches (AGEBs) that partition the area of interest, and it doesn\u2019t necessitate assigning a specific geolocation, such as a patch\u2019s centroid, but only the patch ID of residence."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.2",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "The Brownian bridge model",
|
| 63 |
+
"text": "Suppose that represents the position of the j-th resident from r-th patch at time . Then , as is the number of considered patches, and where is the resident population size in patch .\nLet , \u2026, be the information from the GPS reports for the -th individual that resides in patch . The term corresponds to the total of pings collected during the observation time, and denotes the position of the -th resident from the -th patch at its -th reporting time . If we set the observation period starting with the first ping, then the individual observation period is with .\nNow, let denote the unobserved position of the j-th resident from r-th patch at time which undertakes a random walk from positions to .\nIf a Brownian motion is assumed between these two positions, then is distributed according to a bivariate normal distribution at time , with mean vector and covariance matrix given by\nwhere is the identity matrix and is the Brownian motion variance related to the mobility of the -th resident from -th patch. Thus, the Brownian bridge process has the property of being normal along a straight line joining the points and . In addition, the maximum variability is obtained at the middle of the trajectory and the variance equals 0 when or . This assumption is adequate when the location reports are precise, but in many applications it is convenient to introduce the uncertainty related to measurement errors. For this, we consider that the starting and ending locations are bivariate normal and , where is the variance of the location error.\nTherefore the expected time spent at location , during , is\nwhere is the probability density function of a bivariate normal distribution with mean vector and covariance variance , and\nUsing the bridges, we then can estimate the density of the expected occupation time for each individual, as a mixture of the densities (1 ###reference_###). That is\nFor computing the overall fraction of time spent in a region (occupation time in ), by individuals from patch , we average the individual fractions of times in considering all its patch residents as\nFor this work, we are interested in estimating for each pair of patches and and these component constitute the residence and occupation matrix (ROM). If we have that individuals can travel only to any of the considered patches, the sum of the rows for the matrix will add up to one."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.3",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Likelihood inference",
|
| 69 |
+
"text": "First, we assume that the variance of the location error can be known from the device specifications, while the individual variance of the mobility of the j-th resident from -th patch is unknown. We estimate using the method proposed in [19 ###reference_b19###] (also, see [38 ###reference_b38###]). Therefore, we consider the as odd numbers and estimate the mean based on the independent Brownian bridges for the non-overlapping time intervals\nwhile regarding the in-between observation times as independent observations from these bridges to estimate . Thus, is a bivariate normal random variable with mean and covariance matrix given by\nwhere\n and .\nTherefore, the likelihood function of the parameter can be written as\nwhere again denotes the probability density function of a bivariate normal distribution.\nThe maximum likelihood estimate for is the value that maximizes in (4 ###reference_###). Thus for any individual originating from patch , the estimated probability density function at position is given by\nThen we can estimate the expected occupation time in from resident of patch , based on (3 ###reference_###), as\nIf the information of the device\u2019s location error is unknown, the Markov property in the previous model is not longer true, but we can opt for the BMME extension proposed by [40 ###reference_b40###] based on the joint distribution of where\n, , are iid Gaussian variables with mean 0 and variance that are also independent of the Brownian motion .\nSince is a multivariate gaussian vector, then conditional ditribution\nwhere\nThen the density of the expected occupation time for each individual (2 ###reference_###) under BMME, corresponds to\nSimilarly to the previous model, we can compute the maximum likelihood estimates for and using the likelihood function based on the joint density of the increments\nwhere is the symmetric and banded matrix with elements\nwhere .\nThe downside of introducing as a parameter extra to estimate is outweighed by the advantage of using all the data at the same time to estimate all parameters and the fact that this inference is not computationally more expensive to implement. Once the maximum likelihood estimates are calculated we obtain the expected occupation time in region similarity to (5 ###reference_###) and using (6 ###reference_###),\nIn general, the considered patches can have very irregular shapes and for computing the integral part in the previous expression we\nrequire to resort to numerical methods. For the application we opt for the uniform sampling Monte Carlo integration."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "Results",
|
| 75 |
+
"text": ""
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "5.1",
|
| 79 |
+
"parent_section_id": "5",
|
| 80 |
+
"section_name": "Data filtering and residence selection",
|
| 81 |
+
"text": "In this section, we describe the periods of time we consider for the ROMs estimations, the filtering process to select the devices to include in each estimation, and the resulting residence selection.\nWe have chosen three periods, each divided into two parts, to estimate the mobility patterns. The objective is to compare the resulting estimates matrices pairwise for each period. Table 2 ###reference_### presents these time frames.\nThe reason for selecting these periods was to study the moderate to possible important changes in mobility patters in relation to the implemented control measures for COVID-19. The Secretary of Health of the State of Sonora adopted the \u201cepidemic traffic light\u201d and according to its colors (green, yellow, orange and red) the local government implemented specific measures. Some were recommendations, such as remote working and protective measures, but some other were more strictly imposed, such as limitations on gatherings, capacity restrictions for businesses and remote schooling. Figure 4 ###reference_### shows the number of daily COVID-19 cases for the city of Hermosillo between the second half of 2020 and the beginning of 2021. The existing values for the epidemic traffic light are indicated on the top of this graph.\n###figure_5### To visualize the general epidemic pattern Figure 4 ###reference_### includes the moving average of cases. It also indicates the period of time for which we have the GPS information as the interval between the two vertical lines. Since the vaccination campaign started on 2021, all controls during this period were in relation to hygiene protocols, travel restriction, social distancing, and stay-at-home orders.\nThe contemplated periods and its parts are also represented in Figure 4 ###reference_### as the grey segments above the curve. It becomes evident that all selected periods primarily fall within the yellow alert phase for COVID-19 and are positioned between the fist and second epidemic waves in Hermosillo. Additionally, the second parts of periods 1 and 3 exhibit no indication of a significant rise in confirmed COVID-19 cases. In fact, the moving average shows a slight downward trend between 2020-09-21 and 2020-11-15. Therefore, with regard to confirmed cases, the second part of each period does not foreshadow the onset of the second wave of COVID-19 cases.\nRegarding the urban mobility, we postulate that the second part of each period could coincide with an increase in the movement of people within the city, as they encompass the dates of Halloween (November 31st) as well as the traditional Mexican holiday D\u00eda de Muertos (November 1st and 2nd). Both festivities are fervently celebrated in cemeteries, markets, and streets. This surge in mobility may be associated to the delayed rise in COVID-19 cases that marked the beginning of the second wave."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5.1.1",
|
| 85 |
+
"parent_section_id": "5.1",
|
| 86 |
+
"section_name": "5.1.1 Data selection",
|
| 87 |
+
"text": "The original mobility data in the database consists of 80,582,452 pings that register the timestamp of 306,963 devices (IDs). However, to proceed with the estimation, we filter them to ensure a minimum of pings per device in each part. Table 3 ###reference_### contains the number of IDs that in any of the two parts, of the corresponding period, report at least 5, 11, 15, or 21 weekly pings. The table also shows its magnitude as the percentage of the number of IDs with at least one ping in any of both parts.\nTo perform the Brownian bridges estimation, we filter the mobile phone database for each period by dropping the IDs with less than 10 pings in a week. That is, for each period and each part, we keep IDs with at least 11 weekly pings in any of the two parts. Then the number of selected IDs for each period-part are: 108,252 (), 123,878 (), 108,252 (), 120,156 (), 102,091 (), and 103,184 ().\nThe population in Hermosillo city is about 936,263 inhabitants, then the number of IDs selected to represent the mobility is above 10.9% for any period-part."
|
| 88 |
+
},
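As an illustration of the filtering rule above, the following is a minimal sketch (not the authors' code) of how the weekly-ping threshold could be applied, assuming a pandas DataFrame with the columns of Table 1 (`id_adv`, `timestamp`) and timestamps already parsed to datetimes; requiring the threshold in every ISO week of a part is one possible reading of "at least 11 weekly pings".

```python
import pandas as pd

def select_ids(pings: pd.DataFrame, start: str, end: str, min_weekly_pings: int = 11) -> set:
    """Keep device IDs with at least `min_weekly_pings` pings in each week of [start, end]."""
    part = pings[(pings["timestamp"] >= start) & (pings["timestamp"] <= end)].copy()
    part["week"] = part["timestamp"].dt.isocalendar().week
    weekly_counts = part.groupby(["id_adv", "week"]).size().unstack(fill_value=0)
    keep = weekly_counts[(weekly_counts >= min_weekly_pings).all(axis=1)].index
    return set(keep)

# Example with the first period's parts (dates from Table 2); per the text, an ID is
# retained if it passes the threshold in either of the two parts.
# ids_a = select_ids(pings, "2020-09-21", "2020-10-04")
# ids_b = select_ids(pings, "2020-10-26", "2020-11-08")
# selected = ids_a | ids_b
```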
|
| 89 |
+
{
|
| 90 |
+
"section_id": "5.2",
|
| 91 |
+
"parent_section_id": "5",
|
| 92 |
+
"section_name": "Estimation of residence and occupation matrices for Hermosillo",
|
| 93 |
+
"text": "Based on the selected IDs and their pings information during each period-part (Table 2 ###reference_###), we obtained the maximum likelihood estimates for , (, ) and .\nWith this estimates, in turn, we are able to approximate the ROM for each period-part using (8 ###reference_###) as\nwhere , are the respective residence and visiting patches ().\nAs we mention above, the integral is numerically solved using uniform sampling Monte Carlo integration. For this we randomly sampled 2-dimensional points within the area of Hermosillo and its metropolitan area, identified the patch (AGEB) to which each belongs and evaluate the density on them.\nThe result of this analysis renders 6 different matrices, each of dimension close to , as we consider different patches (AGEBs) and few of them did not keep any IDs after filtering. The complexity of this level of information makes it challenging to convey solely through figures. A map may only capture a fraction of this data, such as a single row of the ROM, depicting the expected proportion of time that individuals residing in an AGEB would spend in all other AGEBs.\nGiven the inherent complexity of representing the entire residence-mobility matrix in relation to spatial layout, we utilize alternative metrics to effectively identify certain attributes of mobility patterns relative to the AGEBs of residency. The first approach involves assessing the distance between the ROMs computed from both parts of each period. The second alternative entails presenting two maps that illustrate differences in two specific aspects of mobility: namely, the absence of mobility and the proportion of time spent by all individuals (who leave their AGEB at any given time point) in other AGEBs"
|
| 94 |
+
},
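The Monte Carlo step described above can be sketched as follows. This is an assumed implementation, not the authors' code: `density` stands for the fitted Brownian-bridge occupation density of one device, `patch_of` for a point-in-polygon lookup returning the AGEB index of each sampled point, and `bbox` for a rectangle covering Hermosillo and its metropolitan area; all three are hypothetical helpers.

```python
import numpy as np

def occupation_by_patch(density, patch_of, bbox, n_patches, n_samples=100_000, seed=None):
    """Uniform-sampling Monte Carlo approximation of one device's time share per patch."""
    rng = np.random.default_rng(seed)
    (xmin, ymin), (xmax, ymax) = bbox
    pts = np.column_stack([rng.uniform(xmin, xmax, n_samples),
                           rng.uniform(ymin, ymax, n_samples)])
    weights = density(pts)                       # occupation density evaluated at the samples
    patches = patch_of(pts)                      # AGEB index (integer) of each sampled point
    occ = np.bincount(patches, weights=weights, minlength=n_patches)
    total = occ.sum()
    return occ / total if total > 0 else occ     # proportion of time spent in each patch

# A row of the ROM for residence patch k would then be the average of these per-device
# occupation vectors over the filtered IDs assigned to residence patch k.
```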
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5.2.1",
|
| 97 |
+
"parent_section_id": "5.2",
|
| 98 |
+
"section_name": "5.2.1 Matrix distances",
|
| 99 |
+
"text": "As our objective of assessing differences between the estimated matrices by period, we compute three of the most relevant matrix distances for the part and matrices in each period. These distance functions are the Manhattan (Mh), Euclidean (E) and Minkowski (Mk) distances, which can be computed for any two matrices with the same dimensions. Let and be two real matrices with the same dimensions. Then Minkowski distances is defined as follows:\nThe Manhattan and Euclidean distances correspond, respectively, to the Minkowski distance with and . We can be observed that these functions are symmetric and null only when all the entries of the matrices coincide.\nThe ROMS are estimated using the filtered data in each period-part and the number of residents may differ from one to the other. To produce matrices with the same dimensions in both parts of each period, and obtain their distance, we selected the AGEBs with at least one resident in both parts. Then for each period, we compare matrices with number of rows: 469, 479 and 463.\nTable 4 ###reference_### presents the distances for the estimated ROMs derived from the two parts in each period.\nWe observe that for , distances are consistently higher. Although its parts are defined when Hermosillo presented a low number of cases (see Figure 4 ###reference_###), occurs several weeks after the first COVID-19 wave, while precedes it and encompasses two significant festivities: Halloween (October 31st) and D\u00eda de Muertos (November 1st and 2nd). The preparations for these celebrations and the ensuing social gatherings (in homes, markets, and cemeteries) can be associated with the increase in urban mobility and social interaction that facilitated the subsequent wave.\nIn contrast, the distance between the estimated ROMs for and is closer, not because they are closer in time, but due to the observed increase in data and the eventual shift to the orange alert level, which could once again restrict urban mobility."
|
| 100 |
+
},
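For equal-shaped matrices, these distances are straightforward to compute; a minimal sketch (the order p actually used for the Minkowski distance is left as a parameter, since it is not reproduced here):

```python
import numpy as np

def minkowski(A: np.ndarray, B: np.ndarray, p: float) -> float:
    """Minkowski distance of order p between two matrices of equal shape."""
    return float(np.sum(np.abs(A - B) ** p) ** (1.0 / p))

def manhattan(A: np.ndarray, B: np.ndarray) -> float:
    return minkowski(A, B, 1.0)   # p = 1

def euclidean(A: np.ndarray, B: np.ndarray) -> float:
    return minkowski(A, B, 2.0)   # p = 2

# Before comparing the two parts of a period, rows are restricted to the AGEBs with at
# least one resident in both parts, so that the two ROMs have the same dimensions.
```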
|
| 101 |
+
{
|
| 102 |
+
"section_id": "5.2.2",
|
| 103 |
+
"parent_section_id": "5.2",
|
| 104 |
+
"section_name": "5.2.2 Differences for mobility characteristics",
|
| 105 |
+
"text": "To further study the pattern changes that occurred in Hermosillo, we propose dividing the population into two categories and them estimating the ROM for one of them. These categories are: individuals who never leave their residence patch in each period-time and those who do it.\nWe estimate the proportion of individual who never leave their AGEB as the proportion of IDs, in each period-time, that did not have any GPS report outside their AGEB of residence. This information is stored in 6 vectors, each of length close to 500 (as again, few AGEBs did not report any residents).\nConsidering the AGEBs present in each pair of periods, Figure 4A ###reference_sf1### illustrates the differences in these proportions (proportion during proportion during , ). For each of the three periods, it is evident that during its second parts, this proportion is slightly larger for most AGEBs, indicating that there is no evidence that more people tend to leave their own AGEB.\n###figure_6### ###figure_7### Now, focusing on the data from residents who left their AGEB, Figure 4B ###reference_sf2### depicts the discrepancies in the daily fraction of time they spend in their own AGEB (the proportion of time in their own AGEB during the proportion of time in their own AGEB during , ). This is akin to examining the difference between the diagonals of the square ROMs, which are estimated considering only those individuals with at least one ping outside of their residence.\nFrom Figures 4A ###reference_sf1### and 4B ###reference_sf2### we observe that while the number of indivuals leaving their AGEBs do not increase, those who do leave, spend less time within their residence. This trend is particularly evident during and may potentially be linked to the preparation for festivities, which is also reflected in the larger distance measures between the ROMs for and (Table 4 ###reference_###).\nThe selection of periods and parts was carried out with the objective of testing the consistency and sensitivity of the produced estimates for mobility. As anticipated, for and , which are consecutive and fall under the yellow epidemic traffic light, the estimates yielded minor differences.\nFor and , their second parts are separated by gaps of three and four weeks, respectively, from their first parts. As mentioned earlier, the mobility changes observed during can be significantly attributed to preparations for and festivities surrounding the holidays during . However, includes only the two days of the second festivity (November 1st and 2nd) and the local government decided to increase mobility restrictions to the orange light level on November 9th, resulting in smaller distances for the matrices estimated for (see Table 4 ###reference_###).\nRegarding the AGEBs and their population, it is notable that AGEBs with the greatest change in mobility for and are identified as having higher population density and a higher index of marginalization than the rest of the city. These areas correspond to the northern zone of the city (north of Blvd. Progreso) and the southwest zone (the area delimited by Paseo del R\u00edo Sonora and Blvd. Las Quintas), which is a residential area with a history of vehicular traffic problems due to the high population density [20 ###reference_b20###].\nAn important aspect to note is that both parts of and occurred during the yellow epidemic traffic light, and throughout these weeks, the number of reported cases remained relatively stable (see Figure 4 ###reference_###). 
However, from just these two pieces of information (implemented mobility restrictions and recent observed number of cases), we cannot predict the next COVID-19 wave in the city. From this, it becomes evident the practical relevance of harnessing mobility information for predicting and mitigating epidemic outbreaks in a given region. Having this information can help in future planning, as the risk of experiencing a second wave could have been associated with the observed longer visiting times to other AGEBs during (Figure 5 ###reference_###).\nThe mobility patterns depicted in the maps of Figure 5 ###reference_### represent just a partial visualization of a complex information system that incorporates population density (Figure 0B ###reference_sf2###) and the visiting time patterns of individuals. All this information is included in the epidemic model in the next section and is necessary for justifiable analyses of the effects of mobility on infection cases."
|
| 106 |
+
},
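A minimal sketch of the quantity mapped in Figure 5A is given below, under assumed column names (`id_adv`, `ageb` for the AGEB of each ping, `home_ageb` for the assigned residence AGEB); it is an illustration of the described computation, not the authors' code.

```python
import pandas as pd

def stay_share(pings: pd.DataFrame) -> pd.Series:
    """Proportion of IDs in each residence AGEB with no ping outside that AGEB."""
    stayed = pings.groupby("id_adv").apply(lambda g: (g["ageb"] == g["home_ageb"]).all())
    home = pings.groupby("id_adv")["home_ageb"].first()
    return stayed.groupby(home).mean()

def stay_share_difference(pings_a: pd.DataFrame, pings_b: pd.DataFrame) -> pd.Series:
    """Difference of the stay-at-home shares between the two parts of a period."""
    sa, sb = stay_share(pings_a), stay_share(pings_b)
    common = sa.index.intersection(sb.index)      # AGEBs present in both parts
    return sb.loc[common] - sa.loc[common]        # second part minus first part (up to sign convention)
```

The analogous quantity in Figure 5B uses the diagonals of the ROMs estimated from the movers only, rather than raw ping counts.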
|
| 107 |
+
{
|
| 108 |
+
"section_id": "5.3",
|
| 109 |
+
"parent_section_id": "5",
|
| 110 |
+
"section_name": "The multi-patch epidemic model with mobility and residency",
|
| 111 |
+
"text": "Human mobility plays major role in the geography of health and epidemiology, as it is a capital factor in the reappearance and persistence of diseases [27 ###reference_b27###]. Particularly, the explosive urban growth and mobility of people within and between urban regions are factors that affect the geographical dispersion of infectious diseases [4 ###reference_b4###]. An approach to incorporate the spatial dynamics of mobility into infectious diseases is the use of mathematical models with differential equations that incorporate residency and mobility to describe human mobility within and between geographic regions in a synthesized manner [52 ###reference_b52###, 18 ###reference_b18###, 22 ###reference_b22###, 2 ###reference_b2###]. To our knowledge, articles following this approach to modeling disease dynamics focus on specifying particular structures for populaton residence, mobility or occupation. They implement human mobility models that are often not estimated (such as gravity and radiation models), or theoretically or through simulations these works demonstrate the mobility effects on various characteristics of the studied disease dynamics, such as disease transmission, endemicity, and epidemic peak and duration. However, from an applied standpoint, an important challenge lies in estimating the occupation time by residence. In fact, the problem of estimating ROMs in urban areas poses a theoretical and computational challenge due to the heterogeneity of individual human mobility within cities. Simply put, knowing how much time individuals residing in patch of a city spend in patch of the same city requires knowledge of which city residents live in patch , as well as their continuous locations over time. The available information systems that come closest to providing continuous location data over time for general population are mobile phone detection systems. However, most of applications using this data estimate only aggregated data such as origen-destination for commuters or migrants, and not the fraction of time that we can expect residents spend in each patch. That is, to our knowledge, no formal methodologies have been published regarding the use of mobile phone detection data to estimate the expected time individuals visit each patch given his/her residence location."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "5.3.1",
|
| 115 |
+
"parent_section_id": "5.3",
|
| 116 |
+
"section_name": "5.3.1 Epidemic model",
|
| 117 |
+
"text": "In this research, we employ the presented methodology to estimate the mobility and occupation parameters in a multi-patch epidemic model and assess their effect on it. To achieve this, we utilize the parameters estimated during the different periods as outlined in Table 2 ###reference_###.\nThe following meta-population multi-patched SEIRS compartmental model has been theoretically studied in [2 ###reference_b2###] and we refer the reader to this article to have the intuition behind its formulation and derivation.\nThe considered epidemic model evolves in different patches with inhabitants, and it is formally posed as:\nfor and where and are the index for any two patches. The term corresponds to the parameter product , .\nTable 5 ###reference_### contains the definition of the parameters used in the model.The parameters and describe the mobility and occupation by residence, and these can be estimated using the methodology used to estimate the ROM.\n###table_1### To appraise the role of mobility on the infection dynamics in Hermosillo, we first estimate the parameters and using the GPS information during each of the periods and parts described in Table 2 ###reference_###. Then we obtain the solution for the system (5 ###reference_###) for , using fixed epidemic parameters , , , , and , , and each of the estimated mobility parameters and . Finally, we compare the effects of changes in the the resulting mobility parameter by obtaining the difference between the two generated infection counts for each period.\nFor instance, to measure the epidemic effect of mobility modification that occurred during , we estimate the mobility parameters for and , and compute the difference between the infection counts from (5 ###reference_###) and under each residence and mobility matrix."
|
| 118 |
+
},
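A schematic right-hand side for a multi-patch SEIRS model of this kind is sketched below. The force-of-infection term follows a standard Lagrangian-movement form and is an assumption here, not a verbatim transcription of the paper's equations; `M[i, j]` denotes the expected fraction of time a resident of patch i spends in patch j, built from the alpha vector and the conditional ROM Q as `M = diag(1 - alpha) + diag(alpha) @ Q`, and the remaining parameter names mirror the roles listed in Table 5.

```python
import numpy as np
from scipy.integrate import solve_ivp

def seirs_rhs(t, y, M, beta, Lambda, mu, kappa, gamma, omega, delta):
    """Schematic multi-patch SEIRS right-hand side (Lagrangian mixing via M)."""
    n = M.shape[0]
    S, E, I, R = y.reshape(4, n)
    N_eff = M.T @ (S + E + I + R)                 # people effectively present in each patch
    I_eff = M.T @ I                               # infectious people present in each patch
    lam = M @ (beta * I_eff / np.maximum(N_eff, 1e-12))   # force of infection on residents of each patch
    dS = Lambda - lam * S - mu * S + omega * R
    dE = lam * S - (kappa + mu) * E
    dI = kappa * E - (gamma + mu + delta) * I
    dR = gamma * I - (omega + mu) * R
    return np.concatenate([dS, dE, dI, dR])

# Example usage (all arrays of length n = number of patches):
# sol = solve_ivp(seirs_rhs, (0, 365), y0,
#                 args=(M, beta, Lambda, mu, kappa, gamma, omega, delta),
#                 dense_output=True)
```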
|
| 119 |
+
{
|
| 120 |
+
"section_id": "5.3.2",
|
| 121 |
+
"parent_section_id": "5.3",
|
| 122 |
+
"section_name": "5.3.2 Estimates for mobility parameters",
|
| 123 |
+
"text": "The mobility parameters categorize the population into two groups: individuals who never leave their residential patch and those who do. The parameter describes the distribution for the proportion of individuals who leave their patch, while refers to the occupation times of individuals in this same category.\nWe estimate the parameter in a similar way to how we produced the vectors depicted in Figure 4A ###reference_sf1###. For each period-part and , we estimate as the proportion of IDs inhabiting AGEB and registering at least one ping outside AGEB . Then, the vectors represented in Figure 4A ###reference_sf1### correspond to the estimation of , as the percentages are obtained relative to the total.\nThe matrix corresponds to the ROM for individuals who leave their patch. We estimate it as the ROM but considering only IDs that where used in the computation of . In other words, we filter the data to retain only IDs that registered at least one ping outside their AGEB of residence and use (9 ###reference_###). Then is the ROM conditioned to individuals leaving their AGEB.\nThe epidemic model does not consider a ROM for individuals not leaving their patch, as it certain that they spend all day withing their own patch."
|
| 124 |
+
},
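A minimal sketch of these decoupled estimates, assuming a `pings` DataFrame with hypothetical columns `id_adv`, `ageb`, and `home_ageb` (the assigned residence AGEB); the conditional ROM is then re-estimated as in Section 5.2, but only over the "movers".

```python
import pandas as pd

# Share of residents of each AGEB with at least one ping outside it (the alpha vector).
leaves = pings.groupby("id_adv").apply(lambda g: (g["ageb"] != g["home_ageb"]).any())
home = pings.groupby("id_adv")["home_ageb"].first()
alpha = leaves.groupby(home).mean()               # one entry per residence AGEB

# IDs used for the conditional ROM: only devices that left their residence AGEB.
movers = leaves[leaves].index
pings_movers = pings[pings["id_adv"].isin(movers)]   # input to the ROM estimation of (9)
```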
|
| 125 |
+
{
|
| 126 |
+
"section_id": "5.3.3",
|
| 127 |
+
"parent_section_id": "5.3",
|
| 128 |
+
"section_name": "5.3.3 Mobility effects on the outbreaks",
|
| 129 |
+
"text": "In order to examine how mobility influences the spread of disease across all the AGEBs in Hermosillo, we set the epidemic parameters and initial conditions for all simulations. The initial number of infected individuals was determined based on observed COVID-19 infection data in Hermosillo, with one initial case each in four AGEBs identified by their respective IDs: 2956, 3367, 5734, and 6200. Thus, for the numerical simulation, the initial values were set as follows:\nThe demographic parameters used in the simulation were obtained from existing literature. The crude death rate for the state of Sonora, Mexico, according to [24 ###reference_b24###], is 6% per 1,000 persons per year. As we are modeling infection dynamics on a daily basis, we used a constant natural death rate of for each individual in any AGEB. Population sizes, , for each AGEB were obtained from the 2019 Mexico census conducted by the National Institute of Statistics and Geography (INEGI).\nFor this analysis, we assumed an almost constant population size. Utilizing the natural crude death rate we calculated the recruitment rate of susceptibles for each AGEB as . Following [28 ###reference_b28###], who posited that the contact rate of COVID-19 falls within the range , we set as the daily rate of infection in each AGEB. Latent and recovery rates, as well as the rate at which recovered individuals become susceptible, were set as days, days, and days, respectively, based on diverse literature on COVID-19.\nFigure 6 ###reference_### illustrates the disparities in infection counts derived from estimated mobility parameters across different periods (\u201c under \u201d \u201c under \u201d, ), shedding light on the impact of mobility on disease dynamics both at the level of individual AGEBs and globally. For instance, Figure 6 ###reference_### reveals a swift progression of the disease in most AGEBs during compared to , particularly within the initial period of approximately days. This suggests that for the majority of AGEBs, their epidemic peak occurred earlier under the estimated mobility parameters in . As the epidemic curve evolves under the estimations derived from , curves in Figure 6 ###reference_### change signs.\nFigure 6 ###reference_### showcases the disparity in the sum of infection counts for each AGEB between and over time. Due to varying population sizes among AGEBs, their contributions to the total number of cases vary. Notably, a few curves deviating from the described pattern do not significantly impact the overall trend at the global level. Similar patterns are discernible in Figures 6 ###reference_### and 6 ###reference_###, as well as Figures 6 ###reference_### and 6 ###reference_###, illustrating disparities in individual AGEB and global infection counts across and , respectively.\nFrom Table 4 ###reference_###, we have that had the most distant estimated ROM matrices. Nonetheless, Figure 6 ###reference_### reveals that the global evolution of diseases from the estimated ROM in and exhibits similar magnitudes (though both are more pronounced than in the case of ). Indeed, during , the region was under yellow, transitioning to orange epidemic traffic light restrictions by the government, which could realistically reduce infections, as depicted in Figure 6 ###reference_###.\nThe similarity observed in the global differences for the epidemic curves for and (Figures 6 ###reference_### and 6 ###reference_###) cannot be directly explained by the distance functions. 
These functions were obtained for the estimated ROMs and not the estimates of the decoupled mobility information and . From Figure 5 ###reference_### we compare the changes in both parts for and . We observe that in both cases individuals who traveled outside their AGEB spent a large proportion of their time in other AGEBs during their second parts. However, for individuals who traveled spent more time outside, and for the proportion of individuals leaving is smaller. We have to stress that these figures alone cannot fully explain the similarity in the simulated outbreaks under and , as they represent only a fraction of the categorized mobility information and do not account for other important factors such as AGEB population sizes.\nWhat we can conclude is that epidemic evolution is deeply linked to population and its mobility patterns. It also becomes evident that simple-summarized mobility models that are introduced into epidemic models may fall short of producing acceptable epidemic forecasts.\nWe cannot underestimate the effect of mobility in the epidemic dynamics, but it is linked to complex high-dimensional models that brings numerous challenges when directly addressing its statistical inference or model fitting. To confront this, we divide the estimation problem into two parts: first, the estimation of mobility parameters (that we have presented here), and second, the statistical inference of remaining epidemic parameters. Results for the inference of the epidemic model (10 ###reference_###) using this strategy, and based on COVID-19 incidence data in Hermosillo, are presented in [1 ###reference_b1###].\n###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13###"
|
| 130 |
+
},
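The comparison described above amounts to solving the model twice, once with each part's estimated mobility matrix and fixed epidemic parameters, and differencing the resulting infection curves; a sketch building on the assumed `seirs_rhs` of Section 5.3.1 (parameter ordering and names as in that sketch):

```python
import numpy as np
from scipy.integrate import solve_ivp

def infection_difference(M_a, M_b, y0, params, t_max=365, n_eval=366):
    """Per-AGEB difference of infection curves under two mobility matrices."""
    t_eval = np.linspace(0, t_max, n_eval)
    curves = []
    for M in (M_a, M_b):
        sol = solve_ivp(seirs_rhs, (0, t_max), y0, args=(M, *params), t_eval=t_eval)
        n = M.shape[0]
        curves.append(sol.y[2 * n:3 * n, :])      # the I-compartments of all patches
    return curves[0] - curves[1], t_eval          # "under part A" minus "under part B"
```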
|
| 131 |
+
{
|
| 132 |
+
"section_id": "6",
|
| 133 |
+
"parent_section_id": null,
|
| 134 |
+
"section_name": "Conclusion",
|
| 135 |
+
"text": "In this article, we propose a method to incorporate the geolocalization data to estimate not only an origin-destination matrix but more detailed mobility information on the daily visiting time by individuals according to their region of residency. The method is rooted in Brownian-type stochastic models that allow us to obtain a distribution of individual displacement trajectories, and with them, we can answer multiple mobility questions.\nFor the considered epidemic model application, we estimate the ROM at the AGEB level that describes the visiting patterns and times for all the individuals that live in each AGEB. This matrix, along with the alpha vector, corresponding to the probabilities that any individual move beyond their AGEB, is an input for a multi-patch SEIR compartmental model applied to the Hermosillo\u2019s urban area.\nFrom the results, it is evident the real impact of population mobility on the epidemiological evolution, as we estimate the residence-mobility matrices utilizing actual data from the city of Hermosillo.\nConsequently, through the epidemic model, we have shown that however small the inter and intra-local residence human mobility may be, the net effect of the same can lead to a noticeable, exigent and momentous local and global infection levels.\nAs in many models incorporating mobility information, the estimation of residence-mobility parameters can be the first step for fitting or estimating all involved parameters. In the case of epidemic models, we are interested in estimating the parameters and solving questions, such as forecasting and effective intervention identification.\nThe one-step estimation of all parameters is likely not feasible, as very complex models prompt non-identifiability problems. Then the produced estimates presented in this research allow us to reduce the model complexity and permit statistical estimation of the infectious agent parameters. This estimation is the subject of ongoing research.\nA challenge that persists in the use of mobile phone data for mobility analysis is the assumption of representativeness of the sample. However, with the increasing use of smartphones, we consider that these databases are becoming more representative samples of the total population. Until then, we can incorporate alternative data to validate or bias-correct some of the estimated parameters.\nFurthermore, the selection of the residential AGEB can be enhanced not only by incorporating additional information from diverse sources but also by accounting for the uncertainty associated with the resulting allocation. For more precise results, it is desirable to incorporate this uncertainty into the resulting assertions on the mobility by AGEB of residence.\nDespite the improvements we can make, the mobility and its estimation, using mobile phone data,\nalready allows generating important inputs to expand the scope of many mathematical models that describe critical social phenomena such as the economy, violence, transmission of information or infectious diseases. With the proposed methodology, other estimates can also be made that consider, for example, mobility by time ranges, such as nighttime and weekends, or the description of mobility by subgroups, such as mobility by age groups and sex in the area of interest."
|
| 136 |
+
}
|
| 137 |
+
],
|
| 138 |
+
"appendix": [],
|
| 139 |
+
"tables": {
|
| 140 |
+
"1": {
|
| 141 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.2.1.1\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.2.1.1.1\" style=\"width:99.6pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.2.1.1.1.1\">Variable</p>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.2.1.1.2\" style=\"width:199.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.2.1.1.2.1\">Description</p>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.1\">\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.2.2.1.1\" style=\"width:99.6pt;\"><span class=\"ltx_text ltx_font_typewriter ltx_align_top\" id=\"S3.T1.2.2.1.1.1\">id_adv</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S3.T1.2.2.1.2\" style=\"width:199.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.2.2.1.2.1\">mobile phone\u2019s ID (unique to each device)</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.3.2\" style=\"background-color:#F2F2F2;\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.2.3.2.1\" style=\"width:99.6pt;\"><span class=\"ltx_text ltx_font_typewriter ltx_align_top\" id=\"S3.T1.2.3.2.1.1\" style=\"background-color:#F2F2F2;\">timestamp</span></td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.2.3.2.2\" style=\"width:199.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.2.3.2.2.1\" style=\"background-color:#F2F2F2;\">Ping date and time</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.4.3\" style=\"background-color:#FFFFFF;\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.2.4.3.1\" style=\"width:99.6pt;\"><span class=\"ltx_text ltx_font_typewriter ltx_align_top\" id=\"S3.T1.2.4.3.1.1\" style=\"background-color:#FFFFFF;\">lat</span></td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.2.4.3.2\" style=\"width:199.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.2.4.3.2.1\" style=\"background-color:#FFFFFF;\">Ping\u2019s Latitude</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.5.4\" style=\"background-color:#F2F2F2;\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.2.5.4.1\" style=\"width:99.6pt;\"><span class=\"ltx_text ltx_font_typewriter ltx_align_top\" id=\"S3.T1.2.5.4.1.1\" style=\"background-color:#F2F2F2;\">lon</span></td>\n<td class=\"ltx_td ltx_align_justify\" id=\"S3.T1.2.5.4.2\" style=\"width:199.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.2.5.4.2.1\" style=\"background-color:#F2F2F2;\">Ping\u2019s Longitude</p>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T1.3.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S3.T1.4.2\" style=\"font-size:90%;\">Variables in mobile phone sensing dataset.</span></figcaption>\n</figure>",
|
| 142 |
+
"capture": "Table 1: Variables in mobile phone sensing dataset."
|
| 143 |
+
},
|
| 144 |
+
"2": {
|
| 145 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.6.7.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T2.6.7.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.6.7.1.1.1\">Abreviation</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.6.7.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.6.7.1.2.1\">Full name</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.6.7.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.6.7.1.3.1\">Start date</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T2.6.7.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.6.7.1.4.1\">End date</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.1.2\">First period - First Part</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.1.3\">2020-09-21</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T2.1.1.4\">2020-10-04</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.2.2.1\" style=\"padding-bottom:4.0pt;\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.2.2.2\" style=\"padding-bottom:4.0pt;\">First period - Second Part</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.2.2.3\" style=\"padding-bottom:4.0pt;\">2020-10-26</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.2.2.4\" style=\"padding-bottom:4.0pt;\">2020-11-08</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.3.3.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.3.3.2\">Second period - First Part</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.3.3.3\">2020-09-21</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.3.3.4\">2020-10-04</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.4.4.1\" style=\"padding-bottom:4.0pt;\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.4.4.2\" style=\"padding-bottom:4.0pt;\">Second period - Second Part</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.4.4.3\" style=\"padding-bottom:4.0pt;\">2020-11-02</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.4.4.4\" style=\"padding-bottom:4.0pt;\">2020-11-15</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.5.5.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.5.2\">Third period - First Part</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.5.3\">2020-09-21</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T2.5.5.4\">2020-10-11</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T2.6.6.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.6.6.2\">Third period - Second Part</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.6.6.3\">2020-10-12</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T2.6.6.4\">2020-11-01</td>\n</tr>\n</tbody>\n</table>\n<figcaption 
class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T2.8.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S5.T2.9.2\" style=\"font-size:90%;\">Periods within which the mobility matrices were estimated.</span></figcaption>\n</figure>",
|
| 146 |
+
"capture": "Table 2: Periods within which the mobility matrices were estimated."
|
| 147 |
+
},
|
| 148 |
+
"3": {
|
| 149 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.2.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.2.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.2.1.1.1.1\">Period</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.2.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.2.1.1.2.1\">+5 pings</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.2.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.2.1.1.3.1\">+11 pings</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.2.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.2.1.1.4.1\">+15 pings</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.2.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.2.1.1.5.1\">+21 pings</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.2.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.2.2.1.1\">First</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.2.2.1.2\">217,396 (88.74%)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.2.2.1.3\">153,117 (62.5%)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.2.2.1.4\">85,742 (35%)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T3.2.2.1.5\">44,859 (18.31%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.2.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.2.3.2.1\">Second</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.2.3.2.2\">226,174 (89.05%)</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.2.3.2.3\">154,374 (60.78%)</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.2.3.2.4\">89,831 (35.37%)</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T3.2.3.2.5\">46,456 (18.29%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.2.4.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.2.4.3.1\">Third</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.2.4.3.2\">186,448 (88.12%)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.2.4.3.3\">128,915 (60.93%)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.2.4.3.4\">68,485 (32.37%)</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T3.2.4.3.5\">32,416 (15.32%)</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T3.3.1.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S5.T3.4.2\" style=\"font-size:90%;\">IDs distribution according to their number of weekly pings per period.</span></figcaption>\n</figure>",
|
| 150 |
+
"capture": "Table 3: IDs distribution according to their number of weekly pings per period."
|
| 151 |
+
},
|
| 152 |
+
"4": {
|
| 153 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T4.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T4.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S5.T4.1.1.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T4.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.1.1.3.1\">Manhattan</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T4.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.1.1.4.1\">Euclidean</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T4.1.1.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.1.1.1.1\">Minkoswki </span>()</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T4.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T4.2.2.1\">First period ()</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.2.2.2\">220.3977</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.2.2.3\">3.0169</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.2.2.4\">1.1631</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T4.3.3.1\">Second period ()</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.3.3.2\">196.1594</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.3.3.3\">2.3036</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.3.3.4\">0.8797</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T4.4.4.1\">Third period ()</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T4.4.4.2\">182.3303</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T4.4.4.3\">2.0883</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T4.4.4.4\">0.7590</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T4.6.1.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S5.T4.7.2\" style=\"font-size:90%;\">Matrices distances for estimated matrices in each period.</span></figcaption>\n</figure>",
|
| 154 |
+
"capture": "Table 4: Matrices distances for estimated matrices in each period."
|
| 155 |
+
},
|
| 156 |
+
"5": {
|
| 157 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T5\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.T5.18\" style=\"width:433.6pt;height:252.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(16.0pt,-9.3pt) scale(1.07969757227886,1.07969757227886) ;\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T5.18.18\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T5.18.18.19.1\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S5.T5.18.18.19.1.1\" style=\"padding-bottom:2.15277pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T5.18.18.19.1.2\" style=\"padding-bottom:2.15277pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T5.18.18.19.1.2.1\">Parameters</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S5.T5.18.18.19.1.3\" style=\"padding-bottom:2.15277pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T5.18.18.19.1.3.1\">Description</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.18.18.20.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" colspan=\"2\" id=\"S5.T5.18.18.20.2.1\">Mobility Parameters:</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S5.T5.18.18.20.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.2.2.2\">\n<td class=\"ltx_td\" id=\"S5.T5.2.2.2.3\" style=\"padding-bottom:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.1.1.1.1\" style=\"padding-bottom:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.2.2.2.2\" style=\"padding-bottom:2.0pt;\">The proportion of individuals that leave their residence patch .</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.4.4.4\">\n<td class=\"ltx_td\" id=\"S5.T5.4.4.4.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.3.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.4.4.4.2\">The proportion of time that an individual from patch spends</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.5.5.5\">\n<td class=\"ltx_td\" id=\"S5.T5.5.5.5.2\" style=\"padding-bottom:5.0pt;\"></td>\n<td class=\"ltx_td\" id=\"S5.T5.5.5.5.3\" style=\"padding-bottom:5.0pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.5.5.5.1\" style=\"padding-bottom:5.0pt;\">in patch , given that is one individual that leaves its residence patch.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.18.18.21.3\">\n<td class=\"ltx_td ltx_align_left\" colspan=\"2\" id=\"S5.T5.18.18.21.3.1\">Epidemiological Parameters:</td>\n<td class=\"ltx_td\" id=\"S5.T5.18.18.21.3.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.7.7.7\">\n<td class=\"ltx_td\" id=\"S5.T5.7.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.6.6.6.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.7.7.7.2\">Rage of recruitment of Susceptible individuals in Patch .</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.9.9.9\">\n<td class=\"ltx_td\" id=\"S5.T5.9.9.9.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.8.8.8.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.9.9.9.2\">Transmission rate for contacts occurring in Patch .</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.11.11.11\">\n<td class=\"ltx_td\" id=\"S5.T5.11.11.11.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.10.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.11.11.11.2\">Per capita natural death rate in Patch .</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.13.13.13\">\n<td class=\"ltx_td\" id=\"S5.T5.13.13.13.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.12.12.12.1\"></td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"S5.T5.13.13.13.2\">Per capita recovery rate of individuals in Patch .</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.14.14.14\">\n<td class=\"ltx_td\" id=\"S5.T5.14.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.14.14.14.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.14.14.14.3\">Per capita loss of immunity rate.</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.16.16.16\">\n<td class=\"ltx_td\" id=\"S5.T5.16.16.16.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.15.15.15.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.16.16.16.2\">Per capita disease induced death rate of Patch .</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.18.18.18\">\n<td class=\"ltx_td ltx_border_bb\" id=\"S5.T5.18.18.18.3\" style=\"padding-bottom:4.30554pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T5.17.17.17.1\" style=\"padding-bottom:4.30554pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T5.18.18.18.2\" style=\"padding-bottom:4.30554pt;\">Per capita rate at which the exposed individuals in patch becomes infectious.</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T5.20.1.1\" style=\"font-size:90%;\">Table 5</span>: </span><span class=\"ltx_text\" id=\"S5.T5.21.2\" style=\"font-size:90%;\">Description of the parameters in model (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2212.09963v2#S5.E10\" title=\"10 \u2023 5.3.1 Epidemic model \u2023 5.3 The multi-patch epidemic model with mobility and residency \u2023 5 Results \u2023 Use of mobile phone sensing data to estimate residence and occupation times in patches: Human mobility restrictions and COVID-19\"><span class=\"ltx_text ltx_ref_tag\">10</span></a>).</span></figcaption>\n</figure>",
|
| 158 |
+
"capture": "Table 5: Description of the parameters in model (10)."
|
| 159 |
+
}
|
| 160 |
+
},
|
| 161 |
+
"image_paths": {
|
| 162 |
+
"1(a)": {
|
| 163 |
+
"figure_path": "2212.09963v2_figure_1(a).png",
|
| 164 |
+
"caption": "A Hermosillo municipality, Sonora, Mexico [12].\nFigure 1: Hermosillo and its Basic Census Geographical Units (AGEBs).",
|
| 165 |
+
"url": "http://arxiv.org/html/2212.09963v2/extracted/5454049/Figure1A.png"
|
| 166 |
+
},
|
| 167 |
+
"1(b)": {
|
| 168 |
+
"figure_path": "2212.09963v2_figure_1(b).png",
|
| 169 |
+
"caption": "B Hermosillo city and its population by AGEB [21].\nFigure 1: Hermosillo and its Basic Census Geographical Units (AGEBs).",
|
| 170 |
+
"url": "http://arxiv.org/html/2212.09963v2/extracted/5454049/Figure1B.png"
|
| 171 |
+
},
|
| 172 |
+
"2": {
|
| 173 |
+
"figure_path": "2212.09963v2_figure_2.png",
|
| 174 |
+
"caption": "Figure 2: Number of pings by weekday and hour.",
|
| 175 |
+
"url": "http://arxiv.org/html/2212.09963v2/extracted/5454049/Figure2.png"
|
| 176 |
+
},
|
| 177 |
+
"3": {
|
| 178 |
+
"figure_path": "2212.09963v2_figure_3.png",
|
| 179 |
+
"caption": "Figure 3: Kernel density of distance and consecutive pings.",
|
| 180 |
+
"url": "http://arxiv.org/html/2212.09963v2/extracted/5454049/Figure3.jpeg"
|
| 181 |
+
},
|
| 182 |
+
"4": {
|
| 183 |
+
"figure_path": "2212.09963v2_figure_4.png",
|
| 184 |
+
"caption": "Figure 4: Daily confirmed cases of COVID-19 in Hermosillo, Sonora in the time frame of study. The points corresponds to the daily observations, the solid black to the smoothed epidemic curve. The top colored bar indicates the local epidemic traffic light, and the grey indicate the periods and parts in Table 2.",
|
| 185 |
+
"url": "http://arxiv.org/html/2212.09963v2/extracted/5454049/Figure4b.png"
|
| 186 |
+
},
|
| 187 |
+
"5(a)": {
|
| 188 |
+
"figure_path": "2212.09963v2_figure_5(a).png",
|
| 189 |
+
"caption": "A Differences for the proportion of individual that do not leave their AGEB of residency.\nFigure 5: Differences for the proportion of individual that do not leave their AGEBs and times spent within it, for those who leave, between the two parts in each period.",
|
| 190 |
+
"url": "http://arxiv.org/html/2212.09963v2/extracted/5454049/Figure5A.png"
|
| 191 |
+
},
|
| 192 |
+
"5(b)": {
|
| 193 |
+
"figure_path": "2212.09963v2_figure_5(b).png",
|
| 194 |
+
"caption": "B Differences for fraction of time spent in own AGEB of residency for individuals who leave.\nFigure 5: Differences for the proportion of individual that do not leave their AGEBs and times spent within it, for those who leave, between the two parts in each period.",
|
| 195 |
+
"url": "http://arxiv.org/html/2212.09963v2/extracted/5454049/Figure5B.png"
|
| 196 |
+
},
|
| 197 |
+
"6(a)": {
|
| 198 |
+
"figure_path": "2212.09963v2_figure_6(a).png",
|
| 199 |
+
"caption": "\\thesubsubfigure P\u20621A\ud835\udc43subscript1\ud835\udc34P1_{A}italic_P 1 start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT and P\u20621B\ud835\udc43subscript1\ud835\udc35P1_{B}italic_P 1 start_POSTSUBSCRIPT italic_B end_POSTSUBSCRIPT.\nFigure 6: Differences in individual AGEBS (left) and global (right) infection curves and proportions for the first and second parts of each period.",
|
| 200 |
+
"url": "http://arxiv.org/html/2212.09963v2/extracted/5454049/Figure6A.png"
|
| 201 |
+
},
|
| 202 |
+
"6(b)": {
|
| 203 |
+
"figure_path": "2212.09963v2_figure_6(b).png",
|
| 204 |
+
"caption": "\\thesubsubfigure P\u20621A\ud835\udc43subscript1\ud835\udc34P1_{A}italic_P 1 start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT and P\u20621B\ud835\udc43subscript1\ud835\udc35P1_{B}italic_P 1 start_POSTSUBSCRIPT italic_B end_POSTSUBSCRIPT.\nFigure 6: Differences in individual AGEBS (left) and global (right) infection curves and proportions for the first and second parts of each period.",
|
| 205 |
+
"url": "http://arxiv.org/html/2212.09963v2/extracted/5454049/Figure6B.png"
|
| 206 |
+
},
|
| 207 |
+
"6(c)": {
|
| 208 |
+
"figure_path": "2212.09963v2_figure_6(c).png",
|
| 209 |
+
"caption": "\\thesubsubfigure P\u20622A\ud835\udc43subscript2\ud835\udc34P2_{A}italic_P 2 start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT and P\u20622B\ud835\udc43subscript2\ud835\udc35P2_{B}italic_P 2 start_POSTSUBSCRIPT italic_B end_POSTSUBSCRIPT.\nFigure 6: Differences in individual AGEBS (left) and global (right) infection curves and proportions for the first and second parts of each period.",
|
| 210 |
+
"url": "http://arxiv.org/html/2212.09963v2/extracted/5454049/Figure6C.png"
|
| 211 |
+
},
|
| 212 |
+
"6(d)": {
|
| 213 |
+
"figure_path": "2212.09963v2_figure_6(d).png",
|
| 214 |
+
"caption": "\\thesubsubfigure P\u20622A\ud835\udc43subscript2\ud835\udc34P2_{A}italic_P 2 start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT and P\u20622B\ud835\udc43subscript2\ud835\udc35P2_{B}italic_P 2 start_POSTSUBSCRIPT italic_B end_POSTSUBSCRIPT.\nFigure 6: Differences in individual AGEBS (left) and global (right) infection curves and proportions for the first and second parts of each period.",
|
| 215 |
+
"url": "http://arxiv.org/html/2212.09963v2/extracted/5454049/Figure6D.png"
|
| 216 |
+
},
|
| 217 |
+
"6(e)": {
|
| 218 |
+
"figure_path": "2212.09963v2_figure_6(e).png",
|
| 219 |
+
"caption": "\\thesubsubfigure P\u20623A\ud835\udc43subscript3\ud835\udc34P3_{A}italic_P 3 start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT and P\u20623B\ud835\udc43subscript3\ud835\udc35P3_{B}italic_P 3 start_POSTSUBSCRIPT italic_B end_POSTSUBSCRIPT.\nFigure 6: Differences in individual AGEBS (left) and global (right) infection curves and proportions for the first and second parts of each period.",
|
| 220 |
+
"url": "http://arxiv.org/html/2212.09963v2/extracted/5454049/Figure6E.png"
|
| 221 |
+
},
|
| 222 |
+
"6(f)": {
|
| 223 |
+
"figure_path": "2212.09963v2_figure_6(f).png",
|
| 224 |
+
"caption": "\\thesubsubfigure P\u20623A\ud835\udc43subscript3\ud835\udc34P3_{A}italic_P 3 start_POSTSUBSCRIPT italic_A end_POSTSUBSCRIPT and P\u20623B\ud835\udc43subscript3\ud835\udc35P3_{B}italic_P 3 start_POSTSUBSCRIPT italic_B end_POSTSUBSCRIPT.\nFigure 6: Differences in individual AGEBS (left) and global (right) infection curves and proportions for the first and second parts of each period.",
|
| 225 |
+
"url": "http://arxiv.org/html/2212.09963v2/extracted/5454049/Figure6F.png"
|
| 226 |
+
}
|
| 227 |
+
},
|
| 228 |
+
"validation": true,
|
| 229 |
+
"references": [
|
| 230 |
+
{
|
| 231 |
+
"1": {
|
| 232 |
+
"title": "https://datos.covid-19.conacyt.mx/.",
|
| 233 |
+
"author": "CONACYT, Covid 19 M\u00e9xico.",
|
| 234 |
+
"venue": "Accessed: 2023-01-20.",
|
| 235 |
+
"url": null
|
| 236 |
+
}
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"2": {
|
| 240 |
+
"title": "https://www.implanhermosillo.gob.mx/wp-content/uploads/2018/02/E6.pdf.",
|
| 241 |
+
"author": "IMPLAN, Intituto municipal de planeaci\u00f3n urbana de hermosillo.",
|
| 242 |
+
"venue": "Accessed: 2023-01-20.",
|
| 243 |
+
"url": null
|
| 244 |
+
}
|
| 245 |
+
},
|
| 246 |
+
{
|
| 247 |
+
"3": {
|
| 248 |
+
"title": "https://knoema.com/atlas/Mexico/Sonora/Crude-Death-Rate, 2021.",
|
| 249 |
+
"author": "Knoema, World data atlas, mexico, sonora mortality.",
|
| 250 |
+
"venue": null,
|
| 251 |
+
"url": null
|
| 252 |
+
}
|
| 253 |
+
}
|
| 254 |
+
],
|
| 255 |
+
"url": "http://arxiv.org/html/2212.09963v2"
|
| 256 |
+
}
|
20240323/2301.06627v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2301.10956v4.json
ADDED
|
@@ -0,0 +1,676 @@
| 1 |
+
{
|
| 2 |
+
"title": "Graph Neural Networks can Recover the Hidden Features Solely from the Graph Structure",
|
| 3 |
+
"abstract": "Graph Neural Networks (GNNs) are popular models for graph learning problems. GNNs show strong empirical performance in many practical tasks. However, the theoretical properties have not been completely elucidated. In this paper, we investigate whether GNNs can exploit the graph structure from the perspective of the expressive power of GNNs. In our analysis, we consider graph generation processes that are controlled by hidden (or latent) node features, which contain all information about the graph structure. A typical example of this framework is kNN graphs constructed from the hidden features. In our main results, we show that GNNs can recover the hidden node features from the input graph alone, even when all node features, including the hidden features themselves and any indirect hints, are unavailable. GNNs can further use the recovered node features for downstream tasks. These results show that GNNs can fully exploit the graph structure by themselves, and in effect, GNNs can use both the hidden and explicit node features for downstream tasks. In the experiments, we confirm the validity of our results by showing that GNNs can accurately recover the hidden features using a GNN architecture built based on our theoretical analysis.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "###figure_1### Graph Neural Networks (GNNs) [15 ###reference_b15###, 45 ###reference_b45###] are popular machine learning models for processing graph data. GNNs take a graph with node features as input and output embeddings of nodes. At each node, GNNs send the node features to neighboring nodes, aggregate the received features, and output the new node features [14 ###reference_b14###]. In this way, GNNs produce valuable node embeddings that take neighboring nodes into account. GNNs show strong empirical performance in machine learning and data mining tasks [65 ###reference_b65###, 12 ###reference_b12###, 21 ###reference_b21###, 18 ###reference_b18###].\nRoughly speaking, GNNs smooth out the node features on the input graph by recursively mixing the node features of neighboring nodes, and GNNs thereby transform noisy features into clean ones (Figure 1 ###reference_### (a)). This smoothing effect has been observed empirically [4 ###reference_b4###, 36 ###reference_b36###] and shown theoretically [29 ###reference_b29###, 37 ###reference_b37###]. There are several GNN architectures that are inspired by the smoothing process [28 ###reference_b28###, 36 ###reference_b36###, 60 ###reference_b60###]. It has also been pointed out that stacking too many layers harms the performance of GNNs due to the over-smoothing effect [29 ###reference_b29###, 6 ###reference_b6###], which is caused by too much mixing of the node features.\nIn this perspective, node features are the primary actors in GNNs, and graphs are secondary. If node features are uninformative at all, GNNs should fail to obtain meaningful node embeddings no matter how they mix node features (Figure 1 ###reference_### (b)). This is in contrast to the opposite scenario: Even if the graphs are uninformative at all, if the node features are informative for downstream tasks, GNNs can obtain meaningful node embeddings just by ignoring edges or not mixing node features at all. Therefore, node features are the first requirement of GNNs, and the graph only provides some boost to the quality of node features [36 ###reference_b36###]. It indicates that GNNs cannot utilize the graph information without the aid of good node features.\nThe central research question of this paper is as follows:\nCan GNNs utilize the graph information\nwithout the aid of node features?\nWe positively answer this question through our theoretical analysis. We show that GNNs can recover the hidden node features that control the generation of the graph structure even without the help of informative node features (Figure 1 ###reference_### (c)). The recovered features contain all the information of the graph structure. The recovered node features can be further used for downstream tasks. These results show that GNNs can essentially use both given node features and graph-based features extracted from the graph structure. Our theoretical results provide a different perspective from the existing beliefs [58 ###reference_b58###, 36 ###reference_b36###] based on empirical observations that GNNs only mix and smooth out node features. In the experiments, we show that existing GNN architectures do not necessarily extract the hidden node features well, and special architectures are required to learn the recovery in empirical situations.\nThe contributions of this paper are summarized as follows:\nWe establish the theory of the feature recovery problem by GNNs for the first time. 
Our analysis provides a new perspective on the expressive power of GNNs.\nWe prove that GNNs can recover the hidden features solely from the graph structure (Theorem 4.4 ###reference_theorem4###). These results show that GNNs have an inherent ability to extract information from the input graph.\nWe validate the theoretical results in the experiments by showing that GNNs can accurately recover the hidden features. We also show that existing GNN architectures are mediocre in this task. These results highlight the importance of inductive biases for GNNs.\nReproducibility: Our code is publicly available at https://github.com/joisino/gnnrecover ###reference_###."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Related Work",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Graph Neural Networks and Its Theory",
|
| 21 |
+
"text": "Graph Neural Networks (GNNs) [15 ###reference_b15###, 45 ###reference_b45###] are now de facto standard models for graph learning problems [27 ###reference_b27###, 55 ###reference_b55###, 65 ###reference_b65###]. There are many applications of GNNs, including bioinformatics [30 ###reference_b30###], physics [7 ###reference_b7###, 38 ###reference_b38###], recommender systems [12 ###reference_b12###, 21 ###reference_b21###], and transportation [59 ###reference_b59###]. There are several formulations of GNNs, including spectral [8 ###reference_b8###], spatial [14 ###reference_b14###], and equivariant [33 ###reference_b33###] ones. We use the message-passing formulation [14 ###reference_b14###] in this paper.\nThe theory of GNNs has been studied extensively in the literature, including generalization GNNs [46 ###reference_b46###, 13 ###reference_b13###, 62 ###reference_b62###] and computational complexity [17 ###reference_b17###, 5 ###reference_b5###, 66 ###reference_b66###, 44 ###reference_b44###]. The most relevant topic to this paper is the expressive power of GNNs, which we will review in the following.\nExpressive Power (or Representation Power) means what kind of functional classes GNNs can realize. Originally, Morris et al. [34 ###reference_b34###] and Xu et al. [61 ###reference_b61###] showed that message-passing GNNs are at most as powerful as the 1-WL test, and they proposed GNNs that are as powerful as the 1-WL and -(set)WL tests. Sato [42 ###reference_b42###, 43 ###reference_b43###] and Loukas [31 ###reference_b31###] also showed that message-passing GNNs are as powerful as a computational model of distributed local algorithms, and they proposed GNNs that are as powerful as port-numbering and randomized local algorithms. Loukas [31 ###reference_b31###] showed that GNNs are Turing-complete under certain conditions (i.e., with unique node ids and infinitely increasing depths). There are various efforts to improve the expressive power of GNNs by non-message-passing architectures [33 ###reference_b33###, 32 ###reference_b32###, 35 ###reference_b35###]. We refer the readers to survey papers [40 ###reference_b40###, 24 ###reference_b24###] for more details on the expressive power of GNNs.\nThe main difference between our analysis and existing ones is that the existing analyses focus on combinatorial characteristics of the expressive power of GNNs, e.g., the WL test, which are not necessarily aligned with the interests of realistic machine learning applications.\nBy contrast, we consider the continuous task of recovering the hidden features from the input graph, which is an important topic in machine learning in its own right [53 ###reference_b53###, 3 ###reference_b3###, 52 ###reference_b52###, 41 ###reference_b41###]. To the best of our knowledge, this is the first paper that reveals the expressive power of GNNs in the context of feature recovery. Furthermore, the existing analysis of expressive power does not take into account the complexity of the models. The existing analyses show that GNNs can solve certain problems, but they may be too complex to be learned by GNNs. By contrast, we show that the feature recovery problem can be solved with low complexity."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Feature Recovery",
|
| 27 |
+
"text": "Estimation of hidden variables that control the generation process of data has been extensively studied in the machine learning literature [53 ###reference_b53###, 52 ###reference_b52###, 26 ###reference_b26###]. These methods are sometimes used for dimensionality reduction, and the estimated features are fed to downstream models. In this paper, we consider the estimation of hidden embeddings from a graph observation [2 ###reference_b2###, 57 ###reference_b57###, 54 ###reference_b54###, 20 ###reference_b20###]. The critical difference between our analysis and the existing ones is that we investigate whether GNNs can represent a recovery algorithm, while the existing works propose general (non-GNN) algorithms that recover features. To the best of our knowledge, we are the first to establish the theory of feature recovery based on GNNs.\nMany empirical works propose feature learning methods for GNNs [17 ###reference_b17###, 56 ###reference_b56###, 64 ###reference_b64###, 22 ###reference_b22###, 39 ###reference_b39###]. The differences between these papers and ours are twofold. First, these methods are not proven to converge to the true features, while we consider a feature learning method that converges to the true features. Second, the existing methods rely heavily on the input node features while we do not assume any input node features. The latter point is important because how GNNs exploit the input graph structure is a central topic in the GNN literature, and sometimes GNNs are shown to NOT benefit from the graph structure [11 ###reference_b11###]. By contrast, our results show that GNNs can extract meaningful information from the input graph from a different perspective than existing work."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Background and Problem Formulation",
|
| 33 |
+
"text": "In this paper, we assume that each node has hidden (or latent) features , and the graph is generated by connecting nodes with similar hidden features. For example, (i) represents the preference of person in social networks, (ii) represents the topic of paper in citation networks, and (iii) represents the geographic location of point in spatial networks.\nThe critical assumption of our problem setting is that the features , such as the true preference of people and the true topic of papers, are not observed, but only the resulting graph is observed.\nSomewhat surprisingly, we will show that GNNs that take the vanilla graph with only simple synthetic node features such as degree features and graph size can consistently estimate the hidden features (Figure 1 ###reference_### (d)).\nIn the following, we describe the assumptions on data and models in detail."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "Assumptions",
|
| 39 |
+
"text": "In this paper, we deal with directed graphs. Directed graphs are general, and undirected graphs can be converted to directed graphs by duplicating every edge in both directions. We assume that there is an arc from to if and only if for a threshold function , i.e., nodes with similar hidden features are connected. It is also assumed that the hidden features are sampled from an unknown distribution in an i.i.d. manner. As we consider the consistency of estimators or the behavior of estimators in a limit of infinite samples (nodes), we assume that a node and its features are generated one by one, and we consider a series of graphs with an increasing number of nodes. Formally, the data generation process and the assumptions are summarized as follows.\nThe domain of the hidden features is a convex compact domain in with smooth boundary .\nFor each , is sampled from in an i.i.d. manner. There is a directed edge from to in if and only if .\nThe density is positive and differentiable with bounded on .\nThere exists a deterministic continuous function on such that converges uniformly to for some with and almost surely.\nis uniformly equicontinuous almost surely, where is the stationary distribution of random walks on .\nNote that these assumptions are common to [20 ###reference_b20###]. It should be noted that the threshold functions can be stochastic and/or dependent on the data as long as Assumption 4 holds. For example, -NN graphs can be realized in this framework by setting to be the distance to the -th nearest neighbor from . We also note that Assumption 4 implies that the degree is the order of . Thus, the degree increases as the number of nodes increases. It ensures the graph is connected with high probability and is consistent with our scenario.\nRemark (One by One Generation). The assumption of adding nodes one at a time may seem tricky. New users are indeed inserted into social networks one by one in some scenarios, but some other graphs do not necessarily follow this process. This assumption is introduced for technical convenience to consider the limit of and to prove the consistency. In practice, the generation process of datasets does not need to follow this assumption. We use a single fixed graph in the experiments. GNNs succeed in recovering the hidden features only if the graph is sufficiently large."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Graph Neural Networks",
|
| 45 |
+
"text": "We consider message-passing GNNs [14 ###reference_b14###] in this paper. Formally, -layer GNNs can be formulated as follows. Let be the explicit (i.e., given, observed) node features, and\nwhere denotes a multiset, and is the set of the neighbors with outgoing edges to node . We call an aggregation function and a update function. Let denote a list of all aggregation and update functions, i.e., specifies a model. Let be the output of the GNN for node and input graph . For notational convenience, , , and denote the number of layers, -th aggregation function, and -th aggregation function of model , respectively.\nTypical applications of GNNs assume that each node has rich explicit features . However, this is not the case in many applications, and only the graph structure is available. For example, when we analyze social networks, demographic features of users may be masked due to privacy concerns. In such a case, synthetic features that can be computed solely from the input graph, such as degree features and the number of nodes, are used as explicit node features [11 ###reference_b11###, 17 ###reference_b17###, 61 ###reference_b61###]. In this paper, we tackle this general and challenging setting to show how GNNs exploit the graph structure. Specifically, we do not assume any external node features but set\nwhere is the degree of node , and is the number of nodes in .\nThe goal of this paper is that GNNs can recover the hidden features even if the node features are as scarce as the simple synthetic features. In words, we show that there exists GNN that uses the explicit node features defined by Eq. (1 ###reference_###)111Precisely, we will add additional random features as Eq. (5 ###reference_###). and outputs . This result is surprising because GNNs have been considered to simply smooth out the input features along the input graph [29 ###reference_b29###, 36 ###reference_b36###]. Our results show that GNNs can imagine new features that are not included in the explicit features from scratch.\nRemark (Expressive Power and Optimization). We note that the goal of this paper is to show the expressive power of GNNs, i.e., the existence of the parameters or the model specification that realizes some function, and how to find them from the data, i.e., optimization, is out of the scope of this paper. The separation of the studies of expressive power and optimization is a convention in the literature [42 ###reference_b42###, 31 ###reference_b31###, 1 ###reference_b1###]. This paper is in line with them. In the experiments, we briefly show the empirical results of optimization."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.3",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "Why Is Recovery Challenging?",
|
| 51 |
+
"text": "###figure_2### The difficulty lies in the fact that the graph distance on the unweighted graph is not consistent with the distance in the hidden feature space. In other words, two nodes can be far in the hidden feature space even if they are close in the input graph. This fact is illustrated in Figure 2 ###reference_###. Most of the traditional node embedding methods rely on the assumption that close nodes in the graph should be embedded close. However, this assumption does not hold in our setting, and thus these methods fail to recover the hidden features from the vanilla graph. If the edge lengths in the hidden feature space are taken into account, the shortest-path distance on the graph is a consistent estimator for the distance of nodes in the hidden feature space. However, the problem is that the edge length , such as the quantitative intimacy between people in social networks and the distance between the true topics of two papers in citation networks, is not available, and what we observe is only the vanilla graph in many applications. If we cannot rely on the distance structure in the given graph, it seems impossible to estimate the distance structure of the hidden feature space.\nA hop count does not reflect the distance in the feature space because one hop on the graph in a sparse region is longer in the feature space than a hop in a dense region. The problem is, we do not know whether the region around node is dense because we do not know the hidden features . One might think that the density around a node could be estimated e.g., by the degree of the node, but this is not the case. Indeed, as the graph shown in Figure 2 ###reference_### is a -NN graph, the degrees of all nodes are the same, and therefore the degree does not provide any information about the density. In general, the density cannot be estimated from a local structure, as von Luxburg et al. [57 ###reference_b57###] noted that \u201cIt is impossible to estimate the density in an unweighted -NN graph by local quantities alone.\u201d\nHowever, somewhat unexpectedly, it can be shown that the density function can be estimated solely from the unweighted graph [20 ###reference_b20###, 51 ###reference_b51###]. Intuitively, the random walk on the unweighted graph converges to the diffusion process on the true feature space as the number of nodes increases, and we can estimate the density function from it. Once the density is estimated, we can roughly estimate the threshold function as edges in low-density regions are long, and edges in dense regions are short. The scale function represents a typical scale of edges around can be used as a surrogate value for the edge length.\nEven if the scale function can be estimated in principle, it is another story whether GNNs can estimate it. We positively prove this. In the following, we first focus on estimating the threshold function by GNNs, and then, we show that GNNs can recover the hidden features by leveraging the threshold function."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Main Results",
|
| 57 |
+
"text": "We present our main results and their proofs in this section. At a high level, our results are summarized as follows:\nWe show in Section 4.1 ###reference_### that GNNs can estimate the threshold function with the aid of the metric recovery theory of unweighted graphs [20 ###reference_b20###, 2 ###reference_b2###]. We use the tool on the random walk and diffusion process, developed in [20 ###reference_b20###, 51 ###reference_b51###]\nWe show in Section 4.2 ###reference_### that GNNs can recover the hidden features up to rigid transformation with the aid of the theory of multidimensional scaling [50 ###reference_b50###, 49 ###reference_b49###] and random node features [43 ###reference_b43###, 1 ###reference_b1###].\nWe show in Theorems 4.2 ###reference_theorem2### and 4.5 ###reference_theorem5### that the number of the functions to be learned is finite regardless of the number of nodes, which is important for learning and generalization [62 ###reference_b62###]."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.1",
|
| 61 |
+
"parent_section_id": "4",
|
| 62 |
+
"section_name": "Graph Neural Networks can Recover the Threshold Function",
|
| 63 |
+
"text": "First, we show that GNNs can consistently estimate the threshold function . As we mentioned in the previous section, the density and threshold function cannot be estimated solely from the local structure. As we will show (and as is known in the classical context), they can be estimated by a PageRank-like global quantity of the input graph.\nFor any and that satisfy Assumptions 1-5, there exist such that with the explicit node features defined by Eq. (1 ###reference_###),\nwhere the probability is with respect to the draw of samples .\nWe prove this theorem by construction. The key idea is that GNNs can simulate random walks on graphs [9 ###reference_b9###]. Once the stationary distribution of random walks is estimated, we can recover the scale from it [20 ###reference_b20###]. The full proof can be found in Appendix A ###reference_###.\n\u220e\nThis theorem states that GNNs can represent a consistent estimator of .\nHowever, this theorem does not bound the number of layers, and the number of layers may grow infinitely as the number of nodes increases. This means that if the size of the graphs is not bounded, the number of functions to be learned grows infinitely. This is undesirable for learning. The following theorem resolves this issue.\nThere exist such that for any and , there exist such that Theorem 4.1 ###reference_theorem1### holds with\nFrom the proof of Theorem 4.1 ###reference_theorem1###, most of the layers in are used for estimating the stationary distribution, which can be realized by a repetition of the same layer.\n\u220e\nThis theorem shows that the number of functions we need to learn is essentially five. This result indicates that learning the scale function has a good algorithmic alignment [62 ###reference_b62###, Definition 3.4]. Moreover, these functions are the same regardless of the graph size. Therefore, in theory, one can fit these functions using small graphs and apply the resulting model to large graphs as long as the underlying law for the generation process, namely and , is fixed. Note that the order of the logical quantification matters. As are universal and are independent with the generation process, they can be learned using other graphs and can be transferred to other types of graphs. The construction of these layers (i.e., the computation of the stationary distribution) can also be used for introducing indicative biases to GNN architectures."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "Graph Neural Networks can Recover the Hidden Features",
|
| 69 |
+
"text": "As we have estimated the scale function, it seems easy to estimate the distance structure by applying the Bellman-Ford algorithm with edge lengths and to recover the hidden features, but this does not work well.\nThe first obstacle is that there is a freedom of rigid transformation. As rotating and shifting the true hidden features does not change the observed graph, we cannot distinguish hidden features that are transformed by rotation solely from the graph. To absorb the degree of freedom, we introduce the following measure of discrepancy of features.\nWe define the distance between two feature matrices as\nwhere is the centering matrix, is the identity matrix, and is the vector of ones. We say that we recover the hidden features if we obtain features such that for sufficiently small .\nIn other words, the distance is the minimum average distance between two features after rigid transformation. This distance is sometimes referred to as the orthogonal Procrustes distance [23 ###reference_b23###, 47 ###reference_b47###, 49 ###reference_b49###], and can be computed efficiently by SVD [47 ###reference_b47###]. Note that if one further wants to recover the rigid transformation factor, one can recover it in a semi-supervised manner by the Procrustes analysis.\nThe second obstacle is that GNNs cannot distinguish nodes. A naive solution is to include unique node ids in the node features. However, this leads the number of dimensions of node features to infinity as the number of nodes tends to infinity. This is not desirable for learning and generalization of the size of graphs. Our solution is to randomly select a constant number of nodes and assign unique node ids only to the selected nodes. Specifically, let be a constant hyperparameter, and we first select nodes uniformly and randomly and set the input node features as\nwhere is the -th standard basis, and is the vector of zeros. Importantly, this approach does not increase the number of dimensions even if the number of nodes tends to infinity because is a constant with respect to . From a technical point of view, this is a critical difference from existing analyses [31 ###reference_b31###, 43 ###reference_b43###, 1 ###reference_b1###], which assume unique node ids. Our analysis strikes an excellent trade-off between a small complexity (a constant dimension) and a strong expressive power (precise recovery). In addition, adding node ids have been though to be valid only for transductive settings [16 ###reference_b16###, Section 5.1.1], but our analysis is valid for inductive setting as well (see also the experiments).\nWe show that we can accurately estimate the distance structure and the hidden features by setting an appropriate number of the selected nodes.\nFor any and that satisfy Assumptions 1-5, for any , there exist and such that with the explicit node features defined by Eq. (5 ###reference_###),\nwhere is the estimated hidden features by GNN , and is the true hidden features. The probability is with respect to the draw of samples and the draw of a random selection of .\nWe prove this theorem by construction. We estimate the threshold function by Theorem 4.1 ###reference_theorem1### and compute the shortest path distances from each selected node in with the estimated edge lengths. The computation of shortest path distances can be done by GNNs [62 ###reference_b62###]. After this process, each node has the information of the (approximate) distance matrix among the selected nodes, which consists of dimensions. 
We then run multidimensional scaling in each node independently and recover the coordinates of the selected nodes. Lastly, the selected nodes announce their coordinates, and the non-selected nodes output the coordinates of the closest nodes in . With sufficiently large , the selected nodes form an -covering of with high probability, and therefore, the mismatch of the non-selected nodes is negligibly small. The full proof can be found in Appendix B ###reference_###.\n\u220e\nAs in Theorem 4.1 ###reference_theorem1###, the statement of Theorem 4.4 ###reference_theorem4### does not bound the number of layers. However, as in Theorem 4.2 ###reference_theorem2###, Theorem 4.4 ###reference_theorem4### can also be realized with a fixed number of functions.\nFor any and , there exist , , , , such that Theorem 4.4 ###reference_theorem4### holds with these functions.\nTherefore, the number of functions we need to learn is essentially a constant. This fact indicates that learning the hidden features has a good algorithmic alignment [62 ###reference_b62###, Definition 3.4]. Besides, the components of these functions, i.e., computation of the stationary distribution, shortest-path distances, and multidimensional scaling, are differentiable almost everywhere. Here, we mean by almost everywhere the existence of non-differentiable points due to the min-operator of the shortest-path algorithm. Strictly speaking, this is no more differentiable than the ReLU function is, but can be optimized in an end-to-end manner by backpropagation using auto-differential frameworks such as PyTorch.\n###figure_3### ###figure_4### Remark (GNNs with Input Node Features). Many of graph-related tasks provide node features as input. Theorem 4.4 ###reference_theorem4### shows that GNNs with as explicit node features can recover , where is defined by Eq. (5 ###reference_###). Thus, if we feed to GNNs as explicit node features, GNN can implicitly use both of and .\nThe most straightforward method for node classification is to apply a feed-forward network to each node independently,\nThis approach is fairly strong when is rich [36 ###reference_b36###] but ignores the graph. Our analysis shows that GNNs can classify nodes using both and , i.e.,\nComparison of Eqs. (6 ###reference_###) and (7 ###reference_###) highlights a strength of GNNs compared to feed-forward networks."
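A small numpy sketch of the discrepancy measure of Definition 4.3: both feature matrices are centered, the best orthogonal transformation is obtained in closed form from an SVD (the standard orthogonal Procrustes solution [47]), and the remaining per-node misalignment is averaged. Averaging over the number of nodes is an assumption about how Eq. (4) normalizes the residual.

```python
import numpy as np

def procrustes_distance(Z_hat, Z):
    """Average distance between Z_hat and Z after centering and the optimal
    orthogonal alignment of Z_hat onto Z."""
    Zc = Z - Z.mean(axis=0, keepdims=True)          # apply the centering matrix H
    Zh = Z_hat - Z_hat.mean(axis=0, keepdims=True)
    U, _, Vt = np.linalg.svd(Zh.T @ Zc)
    Q = U @ Vt                                       # optimal orthogonal matrix
    return np.linalg.norm(Zh @ Q - Zc, axis=1).mean()
```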
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "Experiments",
|
| 75 |
+
"text": "In the experiments, we validate the theorems by empirically showing that GNNs can recover hidden features solely from the input graph."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "5.1",
|
| 79 |
+
"parent_section_id": "5",
|
| 80 |
+
"section_name": "Recovering Features",
|
| 81 |
+
"text": "Datasets. We use the following synthetic and real datasets.\nis a synthetic dataset with a two-moon shape. We construct a -nearest neighbor graph with , which satisfies Assumption 4. As we know the ground-truth generation process and can generate different graphs with the same law of data generation, we use this dataset for validating the theorems and showing generalization ability in an inductive setting.\nis a real consensus dataset. We use age and the logarithm of capital gain as the hidden features and construct a -nearest neighbor graph, i.e., people with similar ages and incomes become friends.\nSettings. We use two different problem settings.\nsetting uses a single graph. We are given the true hidden features for some training nodes, and estimate the hidden features of other nodes. We use percent of the nodes for training and the rest of the nodes for testing.\nsetting uses multiple graphs. In the training phase, we are given two-moon datasets with to nodes and their true hidden features. In the test phase, we are given a new two-moon dataset with nodes and estimate the hidden features of the test graph. This setting is challenging because (i) we do not know any hidden features of the test graphs, and (ii) models need to generalize to extrapolation in the size of the input graphs.\nMethods. As we prove the theorems by construction and know the configuration of GNNs that recover the hidden features except for the unknown parameters about the ground truth data (i.e., the scale and the constant that depends on and ), we use the model architecture that we constructed in our proof and model the unknown parameters, i.e., scaling factor , using -layer perceptron with hidden neurons that takes as input and output . This model can be regarded as the GNNs with the maximum inductive bias for recovering the hidden features. We fix the number of the selected nodes throughout the experiments.\nBaselines. We use -layer Graph Attention Networks (GATs) [55 ###reference_b55###] and Graph Isomorphism Networks (GINs) [61 ###reference_b61###] as baselines. We feed the same explicit node features as in our method, i.e., Eq. (5 ###reference_###), and use the hidden features as the target of the regression.\nDetails. We optimize all the methods with Adam [25 ###reference_b25###] with a learning rate of for epochs. The loss function is , where is the output of GNNs, is the ground truth hidden embeddings, and train-mask extracts the coordinates of the training nodes.\nResults. Figures 3 ###reference_### and 4 ###reference_### show the results. As the rigid transformation factor cannot be determined, we align the recovered features using the orthogonal Procrustes analysis in the postprocessing. We make the following observations.\nObservation 1. Recovery Succeeded. As the lower left panels show, the proposed method succeeds in recovering the hidden features solely from the input graphs. Notably, not only coarse structures such as connected components are recovered but also details such as the curved moon shape in the two-moon dataset and the striped pattern in the Adult dataset are recovered.\nObservation 2. Existing GNNs are mediocre. As the lower right panels show, GINs and GATs extract some information from the graph structure, e.g., they map nearby nodes to similar embeddings as shown by the node colors, but they fail to recover the hidden features accurately regardless of their strong expressive powers. 
This is primarily because the input node features contain little information, which makes recovery difficult. These results highligt the importance of inductive biases of GNNs to exploit the hidden features.\nObservation 3. tSNE fails. The upper right panels show that tSNE on the explicit node features failed to extract meaningful structures. These results indicate that the synthetic node features (Eq. (5 ###reference_###)) do not tell anything about the hidden features, and GNNs recover the hidden features solely from the graph structure.\nObservation 4. Recovery Succeeded in the Inductive Setting. Figure 4 ###reference_### shows that the proposed method succeeded in the inductive setting as well. This shows that the ability of recovery can be transferred to other graph sizes as long as the law of data generation is the same."
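For concreteness, a hedged PyTorch-style sketch of the optimization described in the Details paragraph: any model that maps the graph and the features of Eq. (5) to d-dimensional embeddings is trained with Adam on the loss restricted to the training nodes. The learning rate, epoch count, and the `model(graph, X)` interface are placeholders, not the exact values or API of the released code.

```python
import torch

def fit(model, graph, X, Z_true, train_mask, lr=1e-3, epochs=300):
    """Minimize || (Z_hat - Z_true) restricted to training nodes || with Adam."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        Z_hat = model(graph, X)                     # (n, d) recovered features
        loss = torch.norm((Z_hat - Z_true)[train_mask])
        loss.backward()
        opt.step()
    return model
```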
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5.2",
|
| 85 |
+
"parent_section_id": "5",
|
| 86 |
+
"section_name": "Performance on Downstream Tasks",
|
| 87 |
+
"text": "We confirm that the recovered feature is useful for downstream tasks using popular benchmarks.\nWe use the Planetoid datasets (Cora, CiteSeer, PubMed) [63 ###reference_b63###], Coauthor datasets, and Amazon datasets [48 ###reference_b48###]. First, we discard all the node features in the datasets (e.g., the text information of the citation network). We then feed the vanilla graph to GNNs in the proposed way and recover the hidden node features by GNNs. We fit a logistic regression that estimates the node label from the recovered features . As a baseline, we fit a logistic regression that estimates the node label from the input node feature , i.e., Eq. (5 ###reference_###). We use the standard train/val/test splits of these datasets, i.e., 20 training nodes per class. The accuracy in the test sets is shown in Table 1 ###reference_###.\nThese results show that the recovered features by GNNs are informative for downstream tasks while the input node features are not at all. This indicates that GNNs extract meaningful information solely from the graph structure. We stress that this problem setting where no node features are available is extremely challenging for GNNs. Recall that existing GNNs use the node features (e.g., the text information of the citation network) contained in these datasets, which we intentionally discard and do not use. The results above show that GNNs work well (somewhat unexpectedly) in such a challenging situation."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "6",
|
| 91 |
+
"parent_section_id": null,
|
| 92 |
+
"section_name": "Conclusion",
|
| 93 |
+
"text": "In this paper, we showed that GNNs can recover the hidden node features, which contain all information about the graph structure, solely from the graph input. These results provide a different perspective from the existing results, which indicate that GNNs simply mix and smooth out the given node features. In the experiments, GNNs accurately recover the hidden features in both transductive and inductive settings.\nAcknowledgements. This work was supported by JSPS KAKENHI GrantNumber 21J22490 and 20H04244."
|
| 94 |
+
}
|
| 95 |
+
],
|
| 96 |
+
"appendix": [
|
| 97 |
+
{
|
| 98 |
+
"section_id": "Appendix 1",
|
| 99 |
+
"parent_section_id": null,
|
| 100 |
+
"section_name": "Appendix A Proof of Theorems 4.1 and 4.2",
|
| 101 |
+
"text": "First, we introduce the following lemma.\n[20 ###reference_b20###]\nUnder assumptions 1-5,\nholds almost surely, where is a constant that depends on and , and is the stationary distribution of random walks on .\nWe then prove Theorem 4.1 ###reference_theorem1###.\nWe prove the theorem by construction. Let\nwhere is the random walk matrix of graph . As , the set in the minimum is not empty. As is finite for every , the outer max exists, and therefore exists for every . We set . In the following, we build a GNN whose embeddings of -th layer is\nThe aggregation function of the first layer is\ni.e., computes , which is the sum of probabilities from the incoming edges. The update function of the first layer is\nholds condition (10 ###reference_###) by construction. The aggregation function of the -th layer () is\nand the update function of the -th layer () is\nholds condition (10 ###reference_###) by construction. Lastly, the aggregation function of the -th layer is\nBy Eq. (9 ###reference_###) and (10 ###reference_###), surely. Combining with Eq. (8 ###reference_###) and (15 ###reference_###) yields almost surely.\n\u220e\nThe definitions of the layers, i.e., equations (11 ###reference_###), (12 ###reference_###), (13 ###reference_###), (14 ###reference_###), and (15 ###reference_###), prove Theorem 4.2 ###reference_theorem2###."
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"section_id": "Appendix 2",
|
| 105 |
+
"parent_section_id": null,
|
| 106 |
+
"section_name": "Appendix B Proof of Theorems 4.4 and 4.5",
|
| 107 |
+
"text": "We prove the theorem by construction. Let be an arbitrary covering of . For each point , let be the ball centered at with radius . The number of the selected points in follows the binomial distribution , where\nis positive. Therefore, . Let\nthen\nand\nby the union bound. Therefore,\nIn words, with at least probability , each of contains at least one point. As is an covering of , forms an covering of under , i.e,\nWe set . The first layers are almost the same as the construction in Theorem 4.1 ###reference_theorem1###. The only difference is that as we take as input (Eq. (5 ###reference_###)), we retain this information in the update functions. Therefore, in the -th layer, each node has in the embedding, where, is the estimate of computed by the GNN (Eq. (15 ###reference_###)).\nThe next layers compute the shortest-path distances from each node in using the Bellman-Ford algorithm. Specifically, in the -th layer, the aggregation function is\nwhere min is element-wise minimum, and INF is a sufficiently large constant such as . The update function of the -th layer is\nThe aggregation function of the -th layer is\nand the update function is\nAs the diameter of is at most , the computation of the shortest path distance is complete after iterations. Therefore, is the shortest-path distance from to with the length of edge being .\nThe following layers propagate the distance matrices among . Specifically, the aggregation function of the -th layer is\nThe update function of the -th layer is\nThe aggregation function of the -th layer is\nand the update function of the -th layer is\nThe last update function is defined as follows.\nAs the diameter of is at most , the propagation is complete after iterations. Therefore, is the shortest-path distance from to with the length of edge being . runs the multidimensional scaling. Note that in the -th layer, each node has the distance matrix in its embedding, therefore MDS can be run in each node in a parallel manner. If is in , outputs the coordinate of recovered by MDS. If is not in , outputs the coordinate of the closest selected node, i.e., .\nWe analyze the approximation error of the above processes. Let be the true hidden embeddings of the selected nodes. By Corollary 4.2 of [50 ###reference_b50###], the noise to the distance matrix causes of misalignment of the coordinates. Therefore, if\nholds for some , then\nLet\nIf Eq. (37 ###reference_###) holds, for all ,\nIf is sufficiently large,\nholds from Theorem S.4.5 of [20 ###reference_b20###]. Note that although Theorem S.4.5 of [20 ###reference_b20###] uses the true while we use , from Eq. (9 ###reference_###) and (10 ###reference_###), holds surely, and therefore, the theorem holds because the mismatch diminishes as .\nWe suppose the following event :\nThe probability of this event is at least by Eq. (16 ###reference_###) and (39 ###reference_###). Under this event, for all ,\nholds by Eq. (41 ###reference_###), (36 ###reference_###), (37 ###reference_###), and (38 ###reference_###), and the definition of (Eq. (33 ###reference_###)) and . Under event , for any , there exists such that by Eq. (40 ###reference_###). By applying Eq. (41 ###reference_###) twice,\nThen,\nwhere (a) follows because by Eq. (33 ###reference_###) and by the definition of , (b) follows by the triangle inequality, (c) follows because is an orthogonal matrix, (d) follows by Eq. (46 ###reference_###), and (e) follows by (44 ###reference_###). By Eq. 
(44 ###reference_###) and (53 ###reference_###), the distance between the true embedding and the estimate is less than with rigid transformation and translation . As these transformation parameters are suboptimal for Eq. (4 ###reference_###), these distances are overestimated. Therefore, under event , , which holds with probability at least .\n\u220e\nThe definitions of the layers prove Theorem 4.5 ###reference_theorem5###."
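The multidimensional scaling step in this construction can be pictured with the following numpy sketch of classical MDS: the (approximate) shortest-path distance matrix among the selected nodes is double-centered and eigendecomposed to obtain coordinates. This is the textbook procedure, not the exact operator realized inside the GNN layers.

```python
import numpy as np

def classical_mds(D, k):
    """Embed an m x m distance matrix D into R^k by classical multidimensional scaling."""
    m = D.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m          # centering matrix
    B = -0.5 * H @ (D ** 2) @ H                  # Gram matrix of the embedding
    w, V = np.linalg.eigh(B)                      # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]                 # keep the k largest
    w_top = np.clip(w[idx], 0.0, None)            # guard against small negative values
    return V[:, idx] * np.sqrt(w_top)
```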
|
| 108 |
+
},
|
| 109 |
+
{
|
| 110 |
+
"section_id": "Appendix 3",
|
| 111 |
+
"parent_section_id": null,
|
| 112 |
+
"section_name": "Appendix C Technical Remarks",
|
| 113 |
+
"text": "Remark (Global Information). The existing analyses of the over-smoothing effect [29 ###reference_b29###, 37 ###reference_b37###] show that GNNs with too many layers fail. Therefore, GNNs cannot have wide receptive fields, and GNNs cannot aggregate global information. By contrast, our analysis shows that GNNs can obtain global information, i.e., . This result provides a new insight into the understanding of GNNs. Note that the assumptions of the existing analyses [29 ###reference_b29###, 37 ###reference_b37###] do not hold for our GNN architectures. Therefore, our results do not contradict with the existing results.\nRemark (Positions as Explicit Node Features are Redundant). In some applications, the graph is constructed from observed features, and are available as the explicit node features [19 ###reference_b19###, 58 ###reference_b58###, 18 ###reference_b18###]. For example, each node represents a position of interest in spatial data, the graph is constructed by nearest neighbors based on geographic positions, and the positions are included in the explicit node features . Our main results show that such position features are asymptotically redundant because GNNs can recover them solely from the graph structure. In practice with finite samples, the position features can be informative, and they can introduce a good inductive bias, though.\nLimitation (High Node Degree). We assume high node degrees in Assumption 4 and in the experiments, i.e., . Note that is required for random graphs to be connected [10 ###reference_b10###], so we cannot reduce node degrees so much from a technical point of view. Having said that, there is indeed room for improvement by a factor of , which can be indeed large when is small. This bound is common with [20 ###reference_b20###], and improving the bound is important future work.\nRemark (Dimensionality of the True Features). We need to specify the number of dimensions of the true features, which are not necessarily available in practice. Specifying higher number of dimensions than the true one is not so problematic, as the lower dimensional features are recovered in the subspace of the entire space. In practice, we can find a good dimensionality by measuring a reconstruction loss in an unsupervised manner. Namely, after we recover the features, we construct a nearest neighbor graph from it. If it does not resemble the input graph, the dimensionality may not be sufficient."
|
| 114 |
+
}
|
| 115 |
+
],
|
| 116 |
+
"tables": {
|
| 117 |
+
"1": {
|
| 118 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1\">Performance on Downstream Tasks.</span> Each value represents accuracy. Higher is Better. The performance of the baseline method shows that does not contain any information for solving the downstream tasks. GNNs take such uninformative node features only as node features. Nevertheless, the recovered feature is highly predictive. This indicates that GNNs create completely new and useful node features by themselves even when the input node features are uninformative. </figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.4.3.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S5.T1.4.3.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.4.3.1.2\">Cora</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.4.3.1.3\">CiteSeer</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.4.3.1.4\">PubMed</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.4.3.1.5\">Coauthor CS</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.4.3.1.6\">Coauthor Physics</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.4.3.1.7\">Computers</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T1.4.3.1.8\">Photo</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T1.3.1.1\">Baseline \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.3.1.2\">0.122</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.3.1.3\">0.231</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.3.1.4\">0.355</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.3.1.5\">0.066</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.3.1.6\">0.307</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.3.1.7\">0.185</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.3.1.8\">0.207</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T1.4.2.1\">Recovered Feature \n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.4.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.2.2.1\">0.671</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.4.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.2.3.1\">0.640</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.4.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.2.4.1\">0.653</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.4.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.2.5.1\">0.492</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.4.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.2.6.1\">0.745</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.4.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.2.7.1\">0.528</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.4.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.2.8.1\">0.566</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 119 |
+
"capture": "Table 1: Performance on Downstream Tasks. Each value represents accuracy. Higher is Better. The performance of the baseline method shows that does not contain any information for solving the downstream tasks. GNNs take such uninformative node features only as node features. Nevertheless, the recovered feature is highly predictive. This indicates that GNNs create completely new and useful node features by themselves even when the input node features are uninformative. "
|
| 120 |
+
}
|
| 121 |
+
},
|
| 122 |
+
"image_paths": {
|
| 123 |
+
"1": {
|
| 124 |
+
"figure_path": "2301.10956v4_figure_1.png",
|
| 125 |
+
"caption": "Figure 1: (a) Traditional View (Rich Node Features). GNNs filter features by mixing them with neighboring nodes. (b) Traditional View (Uninformative Features). Filters cannot generate informative features if the inputs are not informative, i.e., garbage in, garbage out. (c) Our Results. GNNs create informative node features by themselves even when the input node features are uninformative by absorbing information from the underlying graph. (d) Illustrations of the Problem Setting. (d.1) Nodes have hidden features from which the input graph is generated. (d.2) The input to GNNs is a vanilla graph without any additional features. Nodes have coordinates for visualization in this panel, but these coordinates are neither fed to GNNs. (d.3) GNNs try to recover the hidden features.",
|
| 126 |
+
"url": "http://arxiv.org/html/2301.10956v4/x1.png"
|
| 127 |
+
},
|
| 128 |
+
"2": {
|
| 129 |
+
"figure_path": "2301.10956v4_figure_2.png",
|
| 130 |
+
"caption": "Figure 2: Illustrations of the Difficulty of Recovery. The input graph is 10101010-NN graph of the hidden features. The shortest path distance between points A and B is 21 hops, and the shortest path distance between points A and C is 18 hops. These distances indicate that point C is closer to point A than point B, but this is not the case in the true feature space. Standard node embedding methods would embed node C closer to A than node B to A, which is not consistent with the true feature. Embedding nodes that are close in the input graph close is the critical assumption in various embedding methods. This assumption does NOT hold in our situation. This disagreement is caused by the different scales of edges in sparse and dense regions. The difficulty lies in the fact that these scales are not directly available in the input information.",
|
| 131 |
+
"url": "http://arxiv.org/html/2301.10956v4/x2.png"
|
| 132 |
+
},
|
| 133 |
+
"3": {
|
| 134 |
+
"figure_path": "2301.10956v4_figure_3.png",
|
| 135 |
+
"caption": "Figure 3: Results for the Transductive Setting. Overall, the proposed method succeeded in recovering the ground truth hidden features while tSNE to \ud835\udc7f\ud835\udc7f{\\boldsymbol{X}}bold_italic_X (Eq. (5)) fails, and GINs and GATs are mediocre. (Top Left) The ground truth hidden embeddings. The node ids are numbered based on the x-coordinate and shown in the node colors. These node ids are for visualization purposes only and are NOT shown to GNNs and downstream algorithms. (Top Mid) The input graph constructed from the hidden features. The positions of the visualization are NOT shown to GNNs. (Top Right) tSNE plot on the synthetic node features, i.e., Eq. (5). These results indicate that the node features are not informative for feature recovery. This introduces challenges to the task. (Bottom Left) The recovered features by the proposed method. They resemble the ground truth not only with respect to the cluster structure but also the x-coordinates (shown in the node colors), the curved moon shapes in the two-moon dataset, and the striped pattern in the Adult dataset. The dGsubscript\ud835\udc51\ud835\udc3ad_{G}italic_d start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT value (Eq. (4)) is small, which indicates the success of the recovery and validates the theory. (Bottom Mid) The recovered features by GINs. They do not resemble the true hidden features. The dGsubscript\ud835\udc51\ud835\udc3ad_{G}italic_d start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT value is mediocre. (Bottom Right) The recovered features by GATs. They do not resemble hidden features, but some clusters are detected (shown in the node colors). The dGsubscript\ud835\udc51\ud835\udc3ad_{G}italic_d start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT value is mediocre. These results show that existing GNNs can extract some information from the graph structure, but they do not fully recover the hidden features.",
|
| 136 |
+
"url": "http://arxiv.org/html/2301.10956v4/x3.png"
|
| 137 |
+
},
|
| 138 |
+
"4": {
|
| 139 |
+
"figure_path": "2301.10956v4_figure_4.png",
|
| 140 |
+
"caption": "Figure 4: Results for the Inductive Setting. The legends and tendencies are the same as in Figure 3. The proposed method succeeded in generalizing to different sizes and keeping dGsubscript\ud835\udc51\ud835\udc3ad_{G}italic_d start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT low even in the extrapolation setting. GINs and GAT partially succeeded in extracting some of graph information, but they are not perfect, and dGsubscript\ud835\udc51\ud835\udc3ad_{G}italic_d start_POSTSUBSCRIPT italic_G end_POSTSUBSCRIPT is moderately high.",
|
| 141 |
+
"url": "http://arxiv.org/html/2301.10956v4/x4.png"
|
| 142 |
+
}
|
| 143 |
+
},
|
| 144 |
+
"validation": true,
|
| 145 |
+
"references": [
|
| 146 |
+
{
|
| 147 |
+
"1": {
|
| 148 |
+
"title": "The surprising power of graph neural networks with random node initialization.",
|
| 149 |
+
"author": "Ralph Abboud, \u0130smail \u0130lkan Ceylan, Martin Grohe, and Thomas Lukasiewicz.",
|
| 150 |
+
"venue": "In Proceedings of the 30th International Joint Conference on Artificial Intelligence, IJCAI, pages 2112\u20132118, 2021.",
|
| 151 |
+
"url": null
|
| 152 |
+
}
|
| 153 |
+
},
|
| 154 |
+
{
|
| 155 |
+
"2": {
|
| 156 |
+
"title": "Shortest path distance in random k-nearest neighbor graphs.",
|
| 157 |
+
"author": "Morteza Alamgir and Ulrike von Luxburg.",
|
| 158 |
+
"venue": "In Proceedings of the 29th International Conference on Machine Learning, ICML, 2012.",
|
| 159 |
+
"url": null
|
| 160 |
+
}
|
| 161 |
+
},
|
| 162 |
+
{
|
| 163 |
+
"3": {
|
| 164 |
+
"title": "Laplacian eigenmaps for dimensionality reduction and data representation.",
|
| 165 |
+
"author": "Mikhail Belkin and Partha Niyogi.",
|
| 166 |
+
"venue": "Neural Comput., 15(6):1373\u20131396, 2003.",
|
| 167 |
+
"url": null
|
| 168 |
+
}
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"4": {
|
| 172 |
+
"title": "Measuring and relieving the over-smoothing problem for graph neural networks from the topological view.",
|
| 173 |
+
"author": "Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun.",
|
| 174 |
+
"venue": "In Proceedings of the 34th AAAI Conference on Artificial Intelligence, AAAI, pages 3438\u20133445, 2020.",
|
| 175 |
+
"url": null
|
| 176 |
+
}
|
| 177 |
+
},
|
| 178 |
+
{
|
| 179 |
+
"5": {
|
| 180 |
+
"title": "FastGCN: Fast learning with graph convolutional networks via importance sampling.",
|
| 181 |
+
"author": "Jie Chen, Tengfei Ma, and Cao Xiao.",
|
| 182 |
+
"venue": "In Proceedings of the 6th International Conference on Learning Representations, ICLR, 2018.",
|
| 183 |
+
"url": null
|
| 184 |
+
}
|
| 185 |
+
},
|
| 186 |
+
{
|
| 187 |
+
"6": {
|
| 188 |
+
"title": "Simple and deep graph convolutional networks.",
|
| 189 |
+
"author": "Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li.",
|
| 190 |
+
"venue": "In Proceedings of the 37th International Conference on Machine Learning, ICML, pages 1725\u20131735, 2020.",
|
| 191 |
+
"url": null
|
| 192 |
+
}
|
| 193 |
+
},
|
| 194 |
+
{
|
| 195 |
+
"7": {
|
| 196 |
+
"title": "Discovering symbolic models from deep learning with inductive biases.",
|
| 197 |
+
"author": "Miles D. Cranmer, Alvaro Sanchez-Gonzalez, Peter W. Battaglia, Rui Xu, Kyle Cranmer, David N. Spergel, and Shirley Ho.",
|
| 198 |
+
"venue": "In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS, 2020.",
|
| 199 |
+
"url": null
|
| 200 |
+
}
|
| 201 |
+
},
|
| 202 |
+
{
|
| 203 |
+
"8": {
|
| 204 |
+
"title": "Convolutional neural networks on graphs with fast localized spectral filtering.",
|
| 205 |
+
"author": "Micha\u00ebl Defferrard, Xavier Bresson, and Pierre Vandergheynst.",
|
| 206 |
+
"venue": "In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, NeurIPS, pages 3837\u20133845, 2016.",
|
| 207 |
+
"url": null
|
| 208 |
+
}
|
| 209 |
+
},
|
| 210 |
+
{
|
| 211 |
+
"9": {
|
| 212 |
+
"title": "Understanding the representation power of graph neural networks in learning graph topology.",
|
| 213 |
+
"author": "Nima Dehmamy, Albert-L\u00e1szl\u00f3 Barab\u00e1si, and Rose Yu.",
|
| 214 |
+
"venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS, pages 15387\u201315397, 2019.",
|
| 215 |
+
"url": null
|
| 216 |
+
}
|
| 217 |
+
},
|
| 218 |
+
{
|
| 219 |
+
"10": {
|
| 220 |
+
"title": "On random graphs i.",
|
| 221 |
+
"author": "Paul Erd\u0151s and Alfr\u00e9d R\u00e9nyi.",
|
| 222 |
+
"venue": "Publicationes mathematicae, 6(1):290\u2013297, 1959.",
|
| 223 |
+
"url": null
|
| 224 |
+
}
|
| 225 |
+
},
|
| 226 |
+
{
|
| 227 |
+
"11": {
|
| 228 |
+
"title": "A fair comparison of graph neural networks for graph classification.",
|
| 229 |
+
"author": "Federico Errica, Marco Podda, Davide Bacciu, and Alessio Micheli.",
|
| 230 |
+
"venue": "In Proceedings of the 8th International Conference on Learning Representations, ICLR, 2020.",
|
| 231 |
+
"url": null
|
| 232 |
+
}
|
| 233 |
+
},
|
| 234 |
+
{
|
| 235 |
+
"12": {
|
| 236 |
+
"title": "Graph neural networks for social recommendation.",
|
| 237 |
+
"author": "Wenqi Fan, Yao Ma, Qing Li, Yuan He, Yihong Eric Zhao, Jiliang Tang, and Dawei Yin.",
|
| 238 |
+
"venue": "In The Web Conference 2019, WWW, pages 417\u2013426, 2019.",
|
| 239 |
+
"url": null
|
| 240 |
+
}
|
| 241 |
+
},
|
| 242 |
+
{
|
| 243 |
+
"13": {
|
| 244 |
+
"title": "Generalization and representational limits of graph neural networks.",
|
| 245 |
+
"author": "Vikas K. Garg, Stefanie Jegelka, and Tommi S. Jaakkola.",
|
| 246 |
+
"venue": "In Proceedings of the 37th International Conference on Machine Learning, ICML, pages 3419\u20133430, 2020.",
|
| 247 |
+
"url": null
|
| 248 |
+
}
|
| 249 |
+
},
|
| 250 |
+
{
|
| 251 |
+
"14": {
|
| 252 |
+
"title": "Neural message passing for quantum chemistry.",
|
| 253 |
+
"author": "Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl.",
|
| 254 |
+
"venue": "In Proceedings of the 34th International Conference on Machine Learning, ICML, pages 1263\u20131272, 2017.",
|
| 255 |
+
"url": null
|
| 256 |
+
}
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"15": {
|
| 260 |
+
"title": "A new model for learning in graph domains.",
|
| 261 |
+
"author": "Marco Gori, Gabriele Monfardini, and Franco Scarselli.",
|
| 262 |
+
"venue": "In Proceedings of the International Joint Conference on Neural Networks, IJCNN, volume 2, pages 729\u2013734, 2005.",
|
| 263 |
+
"url": null
|
| 264 |
+
}
|
| 265 |
+
},
|
| 266 |
+
{
|
| 267 |
+
"16": {
|
| 268 |
+
"title": "Graph representation learning.",
|
| 269 |
+
"author": "William L Hamilton.",
|
| 270 |
+
"venue": "Synthesis Lectures on Artifical Intelligence and Machine Learning, 14(3):1\u2013159, 2020.",
|
| 271 |
+
"url": null
|
| 272 |
+
}
|
| 273 |
+
},
|
| 274 |
+
{
|
| 275 |
+
"17": {
|
| 276 |
+
"title": "Inductive representation learning on large graphs.",
|
| 277 |
+
"author": "William L. Hamilton, Zhitao Ying, and Jure Leskovec.",
|
| 278 |
+
"venue": "In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, NeurIPS, pages 1024\u20131034, 2017.",
|
| 279 |
+
"url": null
|
| 280 |
+
}
|
| 281 |
+
},
|
| 282 |
+
{
|
| 283 |
+
"18": {
|
| 284 |
+
"title": "Vision GNN: an image is worth graph of nodes.",
|
| 285 |
+
"author": "Kai Han, Yunhe Wang, Jianyuan Guo, Yehui Tang, and Enhua Wu.",
|
| 286 |
+
"venue": "In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS, 2022.",
|
| 287 |
+
"url": null
|
| 288 |
+
}
|
| 289 |
+
},
|
| 290 |
+
{
|
| 291 |
+
"19": {
|
| 292 |
+
"title": "GCN-MF: disease-gene association identification by graph convolutional networks and matrix factorization.",
|
| 293 |
+
"author": "Peng Han, Peng Yang, Peilin Zhao, Shuo Shang, Yong Liu, Jiayu Zhou, Xin Gao, and Panos Kalnis.",
|
| 294 |
+
"venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD, pages 705\u2013713, 2019.",
|
| 295 |
+
"url": null
|
| 296 |
+
}
|
| 297 |
+
},
|
| 298 |
+
{
|
| 299 |
+
"20": {
|
| 300 |
+
"title": "Metric recovery from directed unweighted graphs.",
|
| 301 |
+
"author": "Tatsunori B. Hashimoto, Yi Sun, and Tommi S. Jaakkola.",
|
| 302 |
+
"venue": "In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, AISTATS, 2015.",
|
| 303 |
+
"url": null
|
| 304 |
+
}
|
| 305 |
+
},
|
| 306 |
+
{
|
| 307 |
+
"21": {
|
| 308 |
+
"title": "Lightgcn: Simplifying and powering graph convolution network for recommendation.",
|
| 309 |
+
"author": "Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yong-Dong Zhang, and Meng Wang.",
|
| 310 |
+
"venue": "In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR, pages 639\u2013648, 2020.",
|
| 311 |
+
"url": null
|
| 312 |
+
}
|
| 313 |
+
},
|
| 314 |
+
{
|
| 315 |
+
"22": {
|
| 316 |
+
"title": "GPT-GNN: generative pre-training of graph neural networks.",
|
| 317 |
+
"author": "Ziniu Hu, Yuxiao Dong, Kuansan Wang, Kai-Wei Chang, and Yizhou Sun.",
|
| 318 |
+
"venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD, pages 1857\u20131867, 2020.",
|
| 319 |
+
"url": null
|
| 320 |
+
}
|
| 321 |
+
},
|
| 322 |
+
{
|
| 323 |
+
"23": {
|
| 324 |
+
"title": "The procrustes program: Producing direct rotation to test a hypothesized factor structure.",
|
| 325 |
+
"author": "John R Hurley and Raymond B Cattell.",
|
| 326 |
+
"venue": "Behavioral science, 7(2):258, 1962.",
|
| 327 |
+
"url": null
|
| 328 |
+
}
|
| 329 |
+
},
|
| 330 |
+
{
|
| 331 |
+
"24": {
|
| 332 |
+
"title": "Theory of graph neural networks: Representation and learning.",
|
| 333 |
+
"author": "Stefanie Jegelka.",
|
| 334 |
+
"venue": "arXiv, abs/2204.07697, 2022.",
|
| 335 |
+
"url": null
|
| 336 |
+
}
|
| 337 |
+
},
|
| 338 |
+
{
|
| 339 |
+
"25": {
|
| 340 |
+
"title": "Adam: A method for stochastic optimization.",
|
| 341 |
+
"author": "Diederik P. Kingma and Jimmy Ba.",
|
| 342 |
+
"venue": "In Proceedings of the 3rd International Conference on Learning Representations, ICLR, 2015.",
|
| 343 |
+
"url": null
|
| 344 |
+
}
|
| 345 |
+
},
|
| 346 |
+
{
|
| 347 |
+
"26": {
|
| 348 |
+
"title": "Auto-encoding variational bayes.",
|
| 349 |
+
"author": "Diederik P. Kingma and Max Welling.",
|
| 350 |
+
"venue": "In Proceedings of the 2nd International Conference on Learning Representations, ICLR, 2014.",
|
| 351 |
+
"url": null
|
| 352 |
+
}
|
| 353 |
+
},
|
| 354 |
+
{
|
| 355 |
+
"27": {
|
| 356 |
+
"title": "Semi-supervised classification with graph convolutional networks.",
|
| 357 |
+
"author": "Thomas N. Kipf and Max Welling.",
|
| 358 |
+
"venue": "In Proceedings of the 5th International Conference on Learning Representations, ICLR, 2017.",
|
| 359 |
+
"url": null
|
| 360 |
+
}
|
| 361 |
+
},
|
| 362 |
+
{
|
| 363 |
+
"28": {
|
| 364 |
+
"title": "Predict then propagate: Graph neural networks meet personalized pagerank.",
|
| 365 |
+
"author": "Johannes Klicpera, Aleksandar Bojchevski, and Stephan G\u00fcnnemann.",
|
| 366 |
+
"venue": "In Proceedings of the 7th International Conference on Learning Representations, ICLR, 2019.",
|
| 367 |
+
"url": null
|
| 368 |
+
}
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"29": {
|
| 372 |
+
"title": "Deeper insights into graph convolutional networks for semi-supervised learning.",
|
| 373 |
+
"author": "Qimai Li, Zhichao Han, and Xiao-Ming Wu.",
|
| 374 |
+
"venue": "In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, AAAI, pages 3538\u20133545, 2018.",
|
| 375 |
+
"url": null
|
| 376 |
+
}
|
| 377 |
+
},
|
| 378 |
+
{
|
| 379 |
+
"30": {
|
| 380 |
+
"title": "Structure-aware interactive graph neural networks for the prediction of protein-ligand binding affinity.",
|
| 381 |
+
"author": "Shuangli Li, Jingbo Zhou, Tong Xu, Liang Huang, Fan Wang, Haoyi Xiong, Weili Huang, Dejing Dou, and Hui Xiong.",
|
| 382 |
+
"venue": "In Proceedings of the 27th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD, pages 975\u2013985, 2021.",
|
| 383 |
+
"url": null
|
| 384 |
+
}
|
| 385 |
+
},
|
| 386 |
+
{
|
| 387 |
+
"31": {
|
| 388 |
+
"title": "What graph neural networks cannot learn: depth vs width.",
|
| 389 |
+
"author": "Andreas Loukas.",
|
| 390 |
+
"venue": "In Proceedings of the 8th International Conference on Learning Representations, ICLR, 2020.",
|
| 391 |
+
"url": null
|
| 392 |
+
}
|
| 393 |
+
},
|
| 394 |
+
{
|
| 395 |
+
"32": {
|
| 396 |
+
"title": "Provably powerful graph networks.",
|
| 397 |
+
"author": "Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman.",
|
| 398 |
+
"venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS, pages 2153\u20132164, 2019.",
|
| 399 |
+
"url": null
|
| 400 |
+
}
|
| 401 |
+
},
|
| 402 |
+
{
|
| 403 |
+
"33": {
|
| 404 |
+
"title": "Invariant and equivariant graph networks.",
|
| 405 |
+
"author": "Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman.",
|
| 406 |
+
"venue": "In Proceedings of the 7th International Conference on Learning Representations, ICLR, 2019.",
|
| 407 |
+
"url": null
|
| 408 |
+
}
|
| 409 |
+
},
|
| 410 |
+
{
|
| 411 |
+
"34": {
|
| 412 |
+
"title": "Weisfeiler and leman go neural: Higher-order graph neural networks.",
|
| 413 |
+
"author": "Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe.",
|
| 414 |
+
"venue": "In Proceedings of the 33rd AAAI Conference on Artificial Intelligence, AAAI, pages 4602\u20134609, 2019.",
|
| 415 |
+
"url": null
|
| 416 |
+
}
|
| 417 |
+
},
|
| 418 |
+
{
|
| 419 |
+
"35": {
|
| 420 |
+
"title": "Relational pooling for graph representations.",
|
| 421 |
+
"author": "Ryan L. Murphy, Balasubramaniam Srinivasan, Vinayak A. Rao, and Bruno Ribeiro.",
|
| 422 |
+
"venue": "In Proceedings of the 36th International Conference on Machine Learning, ICML, pages 4663\u20134673, 2019.",
|
| 423 |
+
"url": null
|
| 424 |
+
}
|
| 425 |
+
},
|
| 426 |
+
{
|
| 427 |
+
"36": {
|
| 428 |
+
"title": "Revisiting graph neural networks: All we have is low-pass filters.",
|
| 429 |
+
"author": "Hoang NT and Takanori Maehara.",
|
| 430 |
+
"venue": "arXiv, abs/1905.09550, 2019.",
|
| 431 |
+
"url": null
|
| 432 |
+
}
|
| 433 |
+
},
|
| 434 |
+
{
|
| 435 |
+
"37": {
|
| 436 |
+
"title": "Graph neural networks exponentially lose expressive power for node classification.",
|
| 437 |
+
"author": "Kenta Oono and Taiji Suzuki.",
|
| 438 |
+
"venue": "In Proceedings of the 8th International Conference on Learning Representations, ICLR, 2020.",
|
| 439 |
+
"url": null
|
| 440 |
+
}
|
| 441 |
+
},
|
| 442 |
+
{
|
| 443 |
+
"38": {
|
| 444 |
+
"title": "Learning mesh-based simulation with graph networks.",
|
| 445 |
+
"author": "Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, and Peter W. Battaglia.",
|
| 446 |
+
"venue": "In Proceedings of the 9th International Conference on Learning Representations, ICLR, 2021.",
|
| 447 |
+
"url": null
|
| 448 |
+
}
|
| 449 |
+
},
|
| 450 |
+
{
|
| 451 |
+
"39": {
|
| 452 |
+
"title": "GCC: graph contrastive coding for graph neural network pre-training.",
|
| 453 |
+
"author": "Jiezhong Qiu, Qibin Chen, Yuxiao Dong, Jing Zhang, Hongxia Yang, Ming Ding, Kuansan Wang, and Jie Tang.",
|
| 454 |
+
"venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD, pages 1150\u20131160, 2020.",
|
| 455 |
+
"url": null
|
| 456 |
+
}
|
| 457 |
+
},
|
| 458 |
+
{
|
| 459 |
+
"40": {
|
| 460 |
+
"title": "A survey on the expressive power of graph neural networks.",
|
| 461 |
+
"author": "Ryoma Sato.",
|
| 462 |
+
"venue": "arXiv, abs/2003.04078, 2020.",
|
| 463 |
+
"url": null
|
| 464 |
+
}
|
| 465 |
+
},
|
| 466 |
+
{
|
| 467 |
+
"41": {
|
| 468 |
+
"title": "Towards principled user-side recommender systems.",
|
| 469 |
+
"author": "Ryoma Sato.",
|
| 470 |
+
"venue": "In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, CIKM, pages 1757\u20131766, 2022.",
|
| 471 |
+
"url": null
|
| 472 |
+
}
|
| 473 |
+
},
|
| 474 |
+
{
|
| 475 |
+
"42": {
|
| 476 |
+
"title": "Approximation ratios of graph neural networks for combinatorial problems.",
|
| 477 |
+
"author": "Ryoma Sato, Makoto Yamada, and Hisashi Kashima.",
|
| 478 |
+
"venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS, pages 4083\u20134092, 2019.",
|
| 479 |
+
"url": null
|
| 480 |
+
}
|
| 481 |
+
},
|
| 482 |
+
{
|
| 483 |
+
"43": {
|
| 484 |
+
"title": "Random features strengthen graph neural networks.",
|
| 485 |
+
"author": "Ryoma Sato, Makoto Yamada, and Hisashi Kashima.",
|
| 486 |
+
"venue": "In Proceedings of the 2021 SIAM International Conference on Data Mining, SDM, pages 333\u2013341, 2021.",
|
| 487 |
+
"url": null
|
| 488 |
+
}
|
| 489 |
+
},
|
| 490 |
+
{
|
| 491 |
+
"44": {
|
| 492 |
+
"title": "Constant time graph neural networks.",
|
| 493 |
+
"author": "Ryoma Sato, Makoto Yamada, and Hisashi Kashima.",
|
| 494 |
+
"venue": "ACM Trans. Knowl. Discov. Data, 16(5):92:1\u201392:31, 2022.",
|
| 495 |
+
"url": null
|
| 496 |
+
}
|
| 497 |
+
},
|
| 498 |
+
{
|
| 499 |
+
"45": {
|
| 500 |
+
"title": "The graph neural network model.",
|
| 501 |
+
"author": "Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini.",
|
| 502 |
+
"venue": "IEEE Trans. Neural Networks, 20(1):61\u201380, 2009.",
|
| 503 |
+
"url": null
|
| 504 |
+
}
|
| 505 |
+
},
|
| 506 |
+
{
|
| 507 |
+
"46": {
|
| 508 |
+
"title": "The vapnik-chervonenkis dimension of graph and recursive neural networks.",
|
| 509 |
+
"author": "Franco Scarselli, Ah Chung Tsoi, and Markus Hagenbuchner.",
|
| 510 |
+
"venue": "Neural Networks, 108:248\u2013259, 2018.",
|
| 511 |
+
"url": null
|
| 512 |
+
}
|
| 513 |
+
},
|
| 514 |
+
{
|
| 515 |
+
"47": {
|
| 516 |
+
"title": "A generalized solution of the orthogonal procrustes problem.",
|
| 517 |
+
"author": "Peter H Sch\u00f6nemann.",
|
| 518 |
+
"venue": "Psychometrika, 31(1):1\u201310, 1966.",
|
| 519 |
+
"url": null
|
| 520 |
+
}
|
| 521 |
+
},
|
| 522 |
+
{
|
| 523 |
+
"48": {
|
| 524 |
+
"title": "Pitfalls of graph neural network evaluation.",
|
| 525 |
+
"author": "Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan G\u00fcnnemann.",
|
| 526 |
+
"venue": "arXiv, 2018.",
|
| 527 |
+
"url": null
|
| 528 |
+
}
|
| 529 |
+
},
|
| 530 |
+
{
|
| 531 |
+
"49": {
|
| 532 |
+
"title": "Studies in the robustness of multidimensional scaling: Procrustes statistics.",
|
| 533 |
+
"author": "Robin Sibson.",
|
| 534 |
+
"venue": "Journal of the Royal Statistical Society: Series B (Methodological), 40(2):234\u2013238, 1978.",
|
| 535 |
+
"url": null
|
| 536 |
+
}
|
| 537 |
+
},
|
| 538 |
+
{
|
| 539 |
+
"50": {
|
| 540 |
+
"title": "Studies in the robustness of multidimensional scaling: Perturbational analysis of classical scaling.",
|
| 541 |
+
"author": "Robin Sibson.",
|
| 542 |
+
"venue": "Journal of the Royal Statistical Society: Series B (Methodological), 41(2):217\u2013229, 1979.",
|
| 543 |
+
"url": null
|
| 544 |
+
}
|
| 545 |
+
},
|
| 546 |
+
{
|
| 547 |
+
"51": {
|
| 548 |
+
"title": "Diffusion processes with boundary conditions.",
|
| 549 |
+
"author": "Daniel W Stroock and SR Srinivasa Varadhan.",
|
| 550 |
+
"venue": "Communications on Pure and Applied Mathematics, 24(2):147\u2013225, 1971.",
|
| 551 |
+
"url": null
|
| 552 |
+
}
|
| 553 |
+
},
|
| 554 |
+
{
|
| 555 |
+
"52": {
|
| 556 |
+
"title": "Consistent latent position estimation and vertex classification for random dot product graphs.",
|
| 557 |
+
"author": "Daniel L. Sussman, Minh Tang, and Carey E. Priebe.",
|
| 558 |
+
"venue": "IEEE Trans. Pattern Anal. Mach. Intell., 36(1):48\u201357, 2014.",
|
| 559 |
+
"url": null
|
| 560 |
+
}
|
| 561 |
+
},
|
| 562 |
+
{
|
| 563 |
+
"53": {
|
| 564 |
+
"title": "A global geometric framework for nonlinear dimensionality reduction.",
|
| 565 |
+
"author": "Joshua B Tenenbaum, Vin de Silva, and John C Langford.",
|
| 566 |
+
"venue": "science, 290(5500):2319\u20132323, 2000.",
|
| 567 |
+
"url": null
|
| 568 |
+
}
|
| 569 |
+
},
|
| 570 |
+
{
|
| 571 |
+
"54": {
|
| 572 |
+
"title": "Local ordinal embedding.",
|
| 573 |
+
"author": "Yoshikazu Terada and Ulrike von Luxburg.",
|
| 574 |
+
"venue": "In Proceedings of the 31st International Conference on Machine Learning, ICML, pages 847\u2013855, 2014.",
|
| 575 |
+
"url": null
|
| 576 |
+
}
|
| 577 |
+
},
|
| 578 |
+
{
|
| 579 |
+
"55": {
|
| 580 |
+
"title": "Graph attention networks.",
|
| 581 |
+
"author": "Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li\u00f2, and Yoshua Bengio.",
|
| 582 |
+
"venue": "In Proceedings of the 6th International Conference on Learning Representations, ICLR, 2018.",
|
| 583 |
+
"url": null
|
| 584 |
+
}
|
| 585 |
+
},
|
| 586 |
+
{
|
| 587 |
+
"56": {
|
| 588 |
+
"title": "Deep graph infomax.",
|
| 589 |
+
"author": "Petar Velickovic, William Fedus, William L. Hamilton, Pietro Li\u00f2, Yoshua Bengio, and R. Devon Hjelm.",
|
| 590 |
+
"venue": "In Proceedings of the 7th International Conference on Learning Representations, ICLR, 2019.",
|
| 591 |
+
"url": null
|
| 592 |
+
}
|
| 593 |
+
},
|
| 594 |
+
{
|
| 595 |
+
"57": {
|
| 596 |
+
"title": "Density estimation from unweighted k-nearest neighbor graphs: a roadmap.",
|
| 597 |
+
"author": "Ulrike von Luxburg and Morteza Alamgir.",
|
| 598 |
+
"venue": "In Advances in Neural Information Processing Systems 26: Annual Conference on Neural Information Processing Systems 2013, NeurIPS, pages 225\u2013233, 2013.",
|
| 599 |
+
"url": null
|
| 600 |
+
}
|
| 601 |
+
},
|
| 602 |
+
{
|
| 603 |
+
"58": {
|
| 604 |
+
"title": "AM-GCN: adaptive multi-channel graph convolutional networks.",
|
| 605 |
+
"author": "Xiao Wang, Meiqi Zhu, Deyu Bo, Peng Cui, Chuan Shi, and Jian Pei.",
|
| 606 |
+
"venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD, pages 1243\u20131253, 2020.",
|
| 607 |
+
"url": null
|
| 608 |
+
}
|
| 609 |
+
},
|
| 610 |
+
{
|
| 611 |
+
"59": {
|
| 612 |
+
"title": "Traffic flow prediction via spatial temporal graph neural network.",
|
| 613 |
+
"author": "Xiaoyang Wang, Yao Ma, Yiqi Wang, Wei Jin, Xin Wang, Jiliang Tang, Caiyan Jia, and Jian Yu.",
|
| 614 |
+
"venue": "In The Web Conference 2020, WWW, pages 1082\u20131092, 2020.",
|
| 615 |
+
"url": null
|
| 616 |
+
}
|
| 617 |
+
},
|
| 618 |
+
{
|
| 619 |
+
"60": {
|
| 620 |
+
"title": "Simplifying graph convolutional networks.",
|
| 621 |
+
"author": "Felix Wu, Amauri H. Souza Jr., Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Q. Weinberger.",
|
| 622 |
+
"venue": "In Proceedings of the 36th International Conference on Machine Learning, ICML, pages 6861\u20136871, 2019.",
|
| 623 |
+
"url": null
|
| 624 |
+
}
|
| 625 |
+
},
|
| 626 |
+
{
|
| 627 |
+
"61": {
|
| 628 |
+
"title": "How powerful are graph neural networks?",
|
| 629 |
+
"author": "Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka.",
|
| 630 |
+
"venue": "In Proceedings of the 7th International Conference on Learning Representations, ICLR, 2019.",
|
| 631 |
+
"url": null
|
| 632 |
+
}
|
| 633 |
+
},
|
| 634 |
+
{
|
| 635 |
+
"62": {
|
| 636 |
+
"title": "What can neural networks reason about?",
|
| 637 |
+
"author": "Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka.",
|
| 638 |
+
"venue": "In Proceedings of the 8th International Conference on Learning Representations, ICLR, 2020.",
|
| 639 |
+
"url": null
|
| 640 |
+
}
|
| 641 |
+
},
|
| 642 |
+
{
|
| 643 |
+
"63": {
|
| 644 |
+
"title": "Revisiting semi-supervised learning with graph embeddings.",
|
| 645 |
+
"author": "Zhilin Yang, William W. Cohen, and Ruslan Salakhutdinov.",
|
| 646 |
+
"venue": "In Proceedings of the 33rd International Conference on Machine Learning, ICML, pages 40\u201348, 2016.",
|
| 647 |
+
"url": null
|
| 648 |
+
}
|
| 649 |
+
},
|
| 650 |
+
{
|
| 651 |
+
"64": {
|
| 652 |
+
"title": "When does self-supervision help graph convolutional networks?",
|
| 653 |
+
"author": "Yuning You, Tianlong Chen, Zhangyang Wang, and Yang Shen.",
|
| 654 |
+
"venue": "In Proceedings of the 37th International Conference on Machine Learning, ICML, pages 10871\u201310880, 2020.",
|
| 655 |
+
"url": null
|
| 656 |
+
}
|
| 657 |
+
},
|
| 658 |
+
{
|
| 659 |
+
"65": {
|
| 660 |
+
"title": "Link prediction based on graph neural networks.",
|
| 661 |
+
"author": "Muhan Zhang and Yixin Chen.",
|
| 662 |
+
"venue": "In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS, pages 5171\u20135181, 2018.",
|
| 663 |
+
"url": null
|
| 664 |
+
}
|
| 665 |
+
},
|
| 666 |
+
{
|
| 667 |
+
"66": {
|
| 668 |
+
"title": "Layer-dependent importance sampling for training deep and large graph convolutional networks.",
|
| 669 |
+
"author": "Difan Zou, Ziniu Hu, Yewen Wang, Song Jiang, Yizhou Sun, and Quanquan Gu.",
|
| 670 |
+
"venue": "In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS, pages 11247\u201311256, 2019.",
|
| 671 |
+
"url": null
|
| 672 |
+
}
|
| 673 |
+
}
|
| 674 |
+
],
|
| 675 |
+
"url": "http://arxiv.org/html/2301.10956v4"
|
| 676 |
+
}
|
20240323/2301.12528v2.json
ADDED
|
@@ -0,0 +1,227 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Sequential Estimation of Gaussian Process-based Deep State-Space Models",
|
| 3 |
+
"abstract": "We consider the problem of sequential estimation of the unknowns of state-space and deep state-space models that include estimation of functions and latent processes of the models. The proposed approach relies on Gaussian and deep Gaussian processes that are implemented via random feature-based Gaussian processes. In these models, we have two sets of unknowns, highly nonlinear unknowns (the values of the latent processes) and conditionally linear unknowns (the constant parameters of the random feature-based Gaussian processes). We present a method based on particle filtering where the parameters of the random feature-based Gaussian processes are integrated out in obtaining the predictive density of the states and do not need particles. We also propose an ensemble version of the method, with each member of the ensemble having its own set of features. With several experiments, we show that the method can track the latent processes up to a scale and rotation.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "In the last decade, the field of machine learning has seen an exceptional surge and unthinkable accomplishments [1 ###reference_b1###, 2 ###reference_b2###]. One might argue that the main reason behind its major advances has been the much improved capabilities of deep neural networks over classical machine learning methods. Enabled by their structures of multiple processing layers, deep neural networks can learn representations of data at various levels of abstraction [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###].\nAn area of machine learning that undergoes continued growth is Gaussian processes (GPs), which are now routinely employed in solving hard machine learning problems.\nThe reason for this is that they provide a principled, practical, and probabilistic approach to learning [6 ###reference_b6###]. Further, they are flexible, non-parametric, and computationally rather simple. They are used within a Bayesian framework that often leads to powerful methods which also offer valid estimates of uncertainties in predictions and generic model selection procedures [7 ###reference_b7###]. Their main drawback of computational scaling has recently been alleviated by the introduction of generic sparse approximations [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###].\nGPs have also been used for building dynamical models [11 ###reference_b11###]. Because of their beneficial properties, including\nbias-variance trade-off and their Bayesian\nframework, they, too, have become a tool for system\nidentification [12 ###reference_b12###]. The GP-based state-space models (GP-SSMs) describe dynamical systems, where one GP models a state process [13 ###reference_b13###] and another GP models the function between the states and the observations [11 ###reference_b11###].\nIn the literature, there have been various approaches to inference of GP-SSMs.\nFor example, [14 ###reference_b14###] and [15 ###reference_b15###] discussed the combination of GP inference with various filters such as particle filters, extended Kalman filters, and unscented particle filters. Based on the reported results, the particle filters were generally the most accurate. However, the estimation of GPs in [14 ###reference_b14###] and [15 ###reference_b15###] requires the inversion of kernel matrices, which needs cubic time complexity. A computationally efficient way of GP-based inference was researched in [16 ###reference_b16###], and it is based on approximating the GPs in feature spaces with numerous basis functions. The authors used particle Gibbs samplers for all the unknowns. In other words, they did not only sample the particles of the latent states but also the weight vectors and the hyperparameters, hence increasing the computational burden. Another family of efficient estimation of GP-SSMs is based on variational inference. In [17 ###reference_b17###], [18 ###reference_b18###], and [19 ###reference_b19###], different evidence lower bounds (ELBOs) were designed and then optimized. These methods, however, are not sequential or online. In our work we adopted PF for estimating the hidden processes because this methodology is sequential in nature and has the capacity to perform estimation in highly nonlinear and nonstationary settings with any computable probability distributions. PF has also been used in a framework where the state-transition function of a model is parameterized using reproducing kernels [20 ###reference_b20###]. 
Our approach in this paper can be adapted to other types of filters.\nIf the functions are described by deep mappings such as deep GPs, the resulting model is referred to as a GP-based deep state-space model (GP-DSSM) [21 ###reference_b21###].\nThe analytical filters mentioned above are still applicable in deep state-space models (DSSMs). Solutions to DSSMs can be based on Rao-Blackwellized particle filters [22 ###reference_b22###, 23 ###reference_b23###] and mixture Kalman filters [24 ###reference_b24###]. A subclass of DSSMs can be built by extending variational autoencoders (VAEs) as in [25 ###reference_b25###]. The building blocks for these models are recurrent neural networks (RNNs) and VAEs.\nRecently, methods for probabilistic forecasting of time series based on RNNs have been proposed [26 ###reference_b26###]. The objective was to learn complex patterns from raw data by an RNN combined with a parameterized per-time-series linear state-space model. Additional efforts with similar objectives and methodologies were reported in [27 ###reference_b27###]. In [28 ###reference_b28###], a global-local method based on deep factor models with random effects was explored. DSSMs were also used to\nconstruct deep GPs by hierarchically putting transformed GP priors on the length scales and magnitudes of the next level of GPs in the hierarchy [29 ###reference_b29###]. All these methods are different from the ones we propose here.\nOne way of broadening the function\nspace of a GP is by introducing an ensemble of GPs [30 ###reference_b30###, 31 ###reference_b31###, 32 ###reference_b32###, 33 ###reference_b33###].\nEach GP may rely on all or on a subset of training samples and may use a unique kernel to make predictions. Ensembles of GPs have also been used for combining global approximants with local GPs [9 ###reference_b9###, 34 ###reference_b34###]. In [35 ###reference_b35###], an ensemble of GPs was used for online interactive learning.\nWe address the problem of constructing dynamic deep probabilistic latent variable models. The underlying idea is that, unlike standard state-space models, we work with DSSMs, where the variables in the intermediate layers are independently conditioned on the states from the deeper layers, and the dynamics are generated by the process from the deepest layer, the root process. An important task of inference is the estimation of the unknowns of the model, which include the underlying parameters of the GPs and the state (latent) processes of the model.\nThe contributions of the paper are as follows:\na novel kernel-based method that identifies non-linear state-space systems without any information about the functions that govern the latent and observation processes,\nextension of the state-space models to deep structures to improve the model capacity and reveal more information about the studied phenomena, and\nensemble learning to reduce the variances of the estimates of the latent processes and the predictions of the observations."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Background",
|
| 15 |
+
"text": "In this section, for a self-sustained presentation, we provide some background on the methodologies that are the main ingredients of the proposed solutions in this paper."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A Gaussian Processes",
|
| 21 |
+
"text": "A GP, written as , is, in essence, a distribution over functions, where is a mean function, is a kernel or covariance\nfunction, and is a vector of hyperparameters that parameterize the kernel.\nTo simplify the notation, we express a GP as or as , if is emphasized. For any set of inputs in the domain of a real-valued function , the function values are Gaussian distributed, i.e.,\nwhere is the mean and . Given the observations f on X, the predictive distribution of at new inputs is given by [6 ###reference_b6###]\nwith a predictive mean and covariance obtained by"
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B Random Feature-Based Gaussian Processes",
|
| 27 |
+
"text": "GPs do not scale up well with , the number of input-output pairs. We observe that in (3 ###reference_###), one has to invert the matrix , which for large values of becomes an issue. To ameliorate the problem, we resort to approximations by exploiting the concept of sparsity.\nCompared with approximations in a function space, a GP with a shift-invariant kernel has another way of approximation, one that focuses on a feature space [36 ###reference_b36###]. By utilizing feature spaces, the computations do not require matrix decompositions but only matrix multiplications. The vector of random features is comprised of trigonometric functions that are defined by\nwhere is a set of samples randomly drawn from the power spectral density of the kernel of the GP. Then the kernel function can be approximated by if the kernel is shift-invariant. It brings a type of GP approximation according to\nwhere are parameters of the approximating model."
|
| 28 |
+
},
|
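As an aside for readers who want to experiment, here is a minimal numpy sketch of the trigonometric feature construction described above; the RBF kernel choice, the variable names, and the 1/sqrt(M) normalization are our assumptions, not code from the paper.

```python
import numpy as np

def random_features(x, omegas):
    # phi(x) built from spectral samples so that phi(x)^T phi(x') ~ k(x, x')
    proj = omegas @ x                                   # (M,)
    return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(omegas.shape[0])

# Spectral samples for an RBF kernel with length scale ell (its spectral density is Gaussian)
rng = np.random.default_rng(0)
d, M, ell = 2, 50, 1.0
omegas = rng.normal(scale=1.0 / ell, size=(M, d))
phi = random_features(rng.normal(size=d), omegas)       # f(x) is then approximated by theta^T phi
```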
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "II-C Bayesian Linear Regression",
|
| 33 |
+
"text": "In view of the model given by (5 ###reference_###), we provide a brief review of Bayesian linear regression. Consider the following model:\nwhere is a scalar observation, is a zero-mean Gaussian random noise, i.e., , with being unknown, is a known feature vector, and is an unknown parameter vector. We assume that and have a joint prior given by the multivariate normal\u2013inverted Gamma distribution, i.e.,\nwhere , and are parameters of the prior probability density function (pdf), and where and . One can show that the predictive distribution of is given by a Student\u2019s -distribution [37 ###reference_b37###], that is,\nwhere\nThus, for the linear model in (6 ###reference_###), when the prior of and is given by (7 ###reference_###), we have an analytical expression for the predictive distribution of .\nFor the posterior of and \nwe have\nwhere\nClearly, the posterior pdf is also a multivariate normal\u2013inverse Gamma pdf with parameters , and , which are updated from and using (13 ###reference_###), (14 ###reference_###), (15 ###reference_###) and (11 ###reference_###), respectively."
|
| 34 |
+
},
|
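A compact sketch of the conjugate update in (11)–(15) for the linear model (6), written in rank-one (Sherman–Morrison) form; the symbol names and this particular rearrangement are ours and may differ from the paper's exact parameterization.

```python
import numpy as np

def nig_update(mu, Sigma, a, b, phi, y):
    """Update (mu, Sigma, a, b) of p(theta, sigma2 | data) after observing
    y = theta^T phi + e, with theta | sigma2 ~ N(mu, sigma2*Sigma) and sigma2 ~ IG(a, b)."""
    s = 1.0 + phi @ Sigma @ phi          # predictive scale factor
    gain = Sigma @ phi / s               # Kalman-like gain vector
    err = y - mu @ phi                   # innovation
    mu_new = mu + gain * err
    Sigma_new = Sigma - np.outer(gain, phi @ Sigma)
    return mu_new, Sigma_new, a + 0.5, b + 0.5 * err**2 / s
```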
| 35 |
+
{
|
| 36 |
+
"section_id": "2.4",
|
| 37 |
+
"parent_section_id": "2",
|
| 38 |
+
"section_name": "II-D Particle Filtering",
|
| 39 |
+
"text": "In the proposed approach, we will use concepts from particle filtering theory, and in this subsection, we provide the basics of it. Particle filters have the capacity to work sequentially with highly nonlinear models. In many signal processing problems, we aim at tracking a latent process of a state-space model given by\nwhere is a discrete-time index, and is an observation process. Typically, the main objective of PF is to obtain the filtering pdf from\nIn brief, particle filters approximate the pdfs of interest by discrete random measures, where the support of a pdf is given by a set of particles and where each particle is given a weight following fundamental principles. PF is implemented as follows [38 ###reference_b38###, 39 ###reference_b39###, 40 ###reference_b40###]. Suppose that at time the filtering density is approximated by\nwhere the symbol represents the th particle (sample) of , is the Dirac delta function, and is the number of particles. Then we can obtain from \nby implementing three steps:\nGenerate particles from the predictive pdf of , i.e.,\nCompute the weights of the particles according to the likelihood of , or\nand where\nThe approximation of is then given by\nResample the particles using their weights and construct a posterior of with equal weights and where some of the particles are replicated [41 ###reference_b41###]."
|
| 40 |
+
},
|
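The three steps above can be condensed into a short bootstrap particle filter sketch; `propagate` and `likelihood` are placeholders for the model-specific transition and observation densities, and the function is illustrative only.

```python
import numpy as np

def pf_step(particles, y, propagate, likelihood, rng):
    """One particle filter step: propagate, weight, estimate, resample.

    particles : (M, dx) array approximating p(x_{t-1} | y_{1:t-1})
    propagate : draws x_t^(m) ~ p(x_t | x_{t-1}^(m)) for all particles
    likelihood: evaluates p(y_t | x_t^(m)) for all particles
    """
    particles = propagate(particles, rng)                    # step 1: predictive draws
    w = likelihood(y, particles)                             # step 2: weights from the likelihood
    w = w / w.sum()
    x_est = w @ particles                                    # MMSE estimate of x_t
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], x_est                             # step 3: resampled, equally weighted set
```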
| 41 |
+
{
|
| 42 |
+
"section_id": "3",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "III Gaussian Process State Space Model",
|
| 45 |
+
"text": "Now we introduce the GP-based state space model. Suppose the observation process is produced by a state-space model defined by\nwhere (23 ###reference_###) represents the latent state transition equation with the state vector at time instant , and (24 ###reference_###) is the observation equation with being the vector of observations at time instant . The symbols and represent Gaussian distributed errors (noises). A generic graphical representation of an SSM is shown in Fig. 1.\nNext, we express the above two equations using random feature-based GPs. In that case, we write them according to\nwhere the parameters are given by the elements of the matrices , , and , . The parameters and are initialized by Gaussian priors and updated by following Bayesian rules.\nThus, each dimension of and is modeled by its own set of parameters. Further, note that the feature vectors and in (25 ###reference_###) and (26 ###reference_###) are different because they are defined by different sets of samples,\n and , respectively. To simplify the notation, we use and . We assume that the parameter variables are all independent, i.e., the columns of H and are independent of the remaining columns. The noises and are i.i.d. zero-mean Gaussians, where and , with and\n.\nThe model described by (25 ###reference_###) and (26 ###reference_###) contains many unknowns, that is, the vector processes , , the parameter matrices H and , and the noise variances , and , . Conditioned on , the model is of the same form as the one in (6 ###reference_###), whereas conditioned on H and , the model given by (25 ###reference_###) and (26 ###reference_###) is very nonlinear in .\nOn account of the intractable analytical inference, we resort to PF to estimate sequentially the latent states. Given the estimated states, we update the joint distributions of\nH and and of and , respectively. For these updates, we apply Bayesian linear regressions,\nwhere we use multivariate normal\u2013inverse Gamma pdfs for the joint priors of and , respectively.\nNext, we explain how we implement the following:\nthe propagation of ,\nthe updating of the joint posteriors of and , for , ,\nthe updating of the joint posteriors of and , for , , and\nthe weight computation of the particles and the estimation of .\nSuppose that before propagating the samples of the latent process at time , we have particles of , , . Assume also that for each stream of particles at we have the joint posterior of and , which is a multivariate normal\u2013inverted Gamma pdf with parameters and . Further, we have the joint posterior of and , which is also a multivariate normal\u2013inverted Gamma pdf and with parameters and ."
|
| 46 |
+
},
|
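To make (25)–(26) concrete, below is a small generative sketch with random-feature models in both equations; the matrix names `H` and `Theta`, the feature functions, and the isotropic noise variances are assumptions made here for illustration.

```python
import numpy as np

def simulate_rf_gpssm(T, H, Theta, feat_x, feat_y, qx, qy, x0, rng):
    """Generate x_{1:T}, y_{1:T} from x_t = H f_x(x_{t-1}) + u_t and y_t = Theta f_y(x_t) + v_t."""
    X = np.zeros((T, H.shape[0]))
    Y = np.zeros((T, Theta.shape[0]))
    x = x0
    for t in range(T):
        x = H @ feat_x(x) + rng.normal(scale=np.sqrt(qx), size=H.shape[0])
        y = Theta @ feat_y(x) + rng.normal(scale=np.sqrt(qy), size=Theta.shape[0])
        X[t], Y[t] = x, y
    return X, Y
```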
| 47 |
+
{
|
| 48 |
+
"section_id": "3.1",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "III-A Propagation of the particles",
|
| 51 |
+
"text": "We generate the elements of the particles , , from respective univariate Student\u2019s -distributions given by (see also (8 ###reference_###))\nwhere , and\nThus, the propagation includes generating particles by (27 ###reference_###). For each dimension of , we sample particles (thus, we have a total of particles), and they represent the support of ."
|
| 52 |
+
},
|
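A sketch of drawing one coordinate of a propagated particle from a Student's t predictive of the kind used in (27), assuming the normal–inverse-Gamma parameterization sketched after Section II-C; the exact scale expression may differ from the paper's.

```python
import numpy as np

def sample_coordinate(phi, mu, Sigma, a, b, rng):
    """Draw one element of the propagated particle from its Student's t predictive."""
    loc = mu @ phi                                    # predictive location
    scale = np.sqrt((b / a) * (1.0 + phi @ Sigma @ phi))
    return loc + scale * rng.standard_t(df=2.0 * a)   # 2a degrees of freedom
```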
| 53 |
+
{
|
| 54 |
+
"section_id": "3.2",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "III-B Updating of the joint posteriors of",
|
| 57 |
+
"text": "The joint posterior of is a multivariate normal\u2013inverted Gamma pdf with parameters and We update by (31 ###reference_###), and we find the remaining parameters recursively by"
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.3",
|
| 61 |
+
"parent_section_id": "3",
|
| 62 |
+
"section_name": "III-C Updating of the joint posteriors of",
|
| 63 |
+
"text": "The proposed method also requires updating of the joint posteriors of and for , and .\nThe joint posterior of is a multivariate normal\u2013inverted Gamma pdf with parameters and Upon receiving , these parameters are updated by"
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.4",
|
| 67 |
+
"parent_section_id": "3",
|
| 68 |
+
"section_name": "III-D Weight computation of particles and estimation of",
|
| 69 |
+
"text": "We need to assign weights to each particle according to the likelihood of . The computation proceeds according to\nwhere is the likelihood of given , , and , and where represents all the particles generated in the th stream up to time instant , stands for all the vector observations up to time instant , and is the non-normalized weight of .\nWe obtain the likelihood by exploiting (26 ###reference_###), where we use the made assumption that is Gaussian. We find that is a product of Student\u2019s -distributions, i.e.,\nwhere , and\nOnce we compute the non-normalized weights by (39 ###reference_###), we normalize them according to\nAfter normalizing the weights, the minimum mean square estimate (MMSE) of is obtained by\nThe approximation of the posterior is then given by\nFinally, we resample particles from to obtain the particles that will be used for propagation in the next time instant .\nThe complete procedure is summarized by Algorithm 1 ###reference_###. We point out that an alternative algorithm can be applied where all the particles share the same parameters H and ."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "IV Ensemble Learning",
|
| 75 |
+
"text": "The use of only a single set of random samples, , might not be sufficiently accurate. In order to mitigate the problem, we introduce an ensemble of different sets of and then combine the results obtained by each set.\nLet be a shift-invariant kernel from a known kernel dictionary . Ideally, should be built as large as computational resources allow. We create the sets by sampling from the power spectral density of each kernel candidate .\nFor estimating the latent state, we use these sets as follows.\nIf is the th set, the posterior contribution or weight of the GP based on the th set to the estimate of the latent state at time is . Then, the predictive density of at time \nis obtained from\nwhere is the total number of sets and where the posterior weight is updated by"
|
| 76 |
+
},
|
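The posterior weight recursion and the mixture estimate described above admit a very short sketch; it assumes the members' state estimates have already been aligned as in Section IV-A, and the function names are illustrative.

```python
import numpy as np

def update_member_weights(weights, pred_liks):
    # w_k,t is proportional to w_k,t-1 * p_k(y_t | y_{1:t-1}) for each feature set Omega_k
    w = weights * pred_liks
    return w / w.sum()

def fuse_estimates(weights, member_states):
    # weighted average of the (aligned) per-member state estimates: (K,) @ (K, dx) -> (dx,)
    return weights @ member_states
```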
| 77 |
+
{
|
| 78 |
+
"section_id": "4.1",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "IV-A Ensemble Estimates of the States",
|
| 81 |
+
"text": "The ensemble estimate of the latent states is given by the mixture\nWe point out that the estimates of the latent states by random feature-based methods are identifiable up to a scale, shift, and rotation [42 ###reference_b42###]. Thus, to fuse the state estimates, we have to force the estimators of all the ensemble members into the same coordinate base. To that end, we arbitrarily fix the rotation of by taking the singular value decomposition (SVD) of the MMSE estimate, , and setting the new estimated as the columns of the left singular vectors U with largest singular values. Then we mirror and rotate all the candidate latent states so that they have the same pattern. First, we set a guidance point with respect to a specific time . Then we rotate all the latent states to make sure that is \u201coverlapped\u201d with . Finally, we take the weighted average of as the ensemble estimate of ."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.2",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "IV-B Keep and Drop",
|
| 87 |
+
"text": "We use the individual estimates of the ensemble members to improve on their respective estimates.\nIn practice, if we do not take precautionary measures, only a small portion of them would remain with significant weights.\nFor this reason, we remove the members with small weights using the principle of resampling\nand replace them with members that perform much better. With replacements, we reduce the diversity of features in the ensemble but increase the number of particles that explore the spaces of the latent processes with the features of the replicated members.\nFurther, we note that candidate models need to be trained at the beginning. For this stage, we fix the weights to at the beginning until ."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "5",
|
| 91 |
+
"parent_section_id": null,
|
| 92 |
+
"section_name": "Gaussian Process-based Deep State-Space Models",
|
| 93 |
+
"text": "One of the advantages of deep structures is to use one or a few simple nonlinear activation functions to improve the approximation of unknown highly nonlinear target functions. In the field of signal processing, the advantage of deep structures of state-space models is that with more hidden layers we can improve the modeling capacity of the model.\nTypically, the state processes of the hidden layers will be of different dimensions, and in some settings, the deep models can be justified using arguments that reflect our understanding of the phenomena we model. A generic diagram of a deep SSM with layers is shown in Fig. 2.\nBorrowing from concepts of deep learning, we introduce a Gaussian process-based deep state-space model (GP-DSSM). This model uses one simple kernel that is combined with a deep structure to approximate the unknown target kernel. Formally, we express a GP-DSSM with hidden layers as follows:\nwhere , are latent processes, is a vector of observations, are the feature functions embedded with different for every layer, has the same meaning as before, and are parameter variables, and and are perturbations.\nWe refer to the deepest latent process (defined by (50 ###reference_###)) as the root process of the model.\nHere we assume that the dimensions of the latent processes are predefined. The objective of inference is to estimate all the latent processes , and all the parameters of the model H and , .\nThe inference method and procedures are very similar to the method we described for the ordinary GP-SSM."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5.1",
|
| 97 |
+
"parent_section_id": "5",
|
| 98 |
+
"section_name": "Propagation of the particles in all layers",
|
| 99 |
+
"text": "At time , first we propagate the particles from to and then the particles of the remaining latent processes . In propagating these particles, we apply analogous Student\u2019s -distributions as in (8 ###reference_###), i.e.,\nwhere represents the latent states in all the layers up to time , and where the parameters , and are defined similarly as in (28 ###reference_###) \u2013 (31 ###reference_###)."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "5.2",
|
| 103 |
+
"parent_section_id": "5",
|
| 104 |
+
"section_name": "Updating of the joint posteriors of and",
|
| 105 |
+
"text": "These updates follow the schemes described by (32 ###reference_###)\u2013(34 ###reference_###) and (35 ###reference_###)\u2013(38 ###reference_###), respectively. We note that these updates can be performed in parallel once all the particles in all the layers have been propagated."
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "5.3",
|
| 109 |
+
"parent_section_id": "5",
|
| 110 |
+
"section_name": "Weight computation of particles and estimation of the latent processes",
|
| 111 |
+
"text": "We assign weights to the particles according to the likelihoods of the particles, that is, we use\nwhere is the likelihood of given , , and is the non-normalized weight of . The computation of this weight is carried out via a Student\u2019s -distribution\nof the form as in (40 ###reference_###) and whose parameters are from expressions analogous to (41 ###reference_###)\u2013(43 ###reference_###). Upon the computation of the weights, we normalize them as per (44 ###reference_###). Clearly, these weights directly depend on only and not on the particles from the previous layers. Finally, the minimum mean square estimate (MMSE) of is computed by (45 ###reference_###) and the approximation of the posterior is given by (46 ###reference_###).\nThe estimates of the remaining processes is carried out by first computing the weights of the particles of the corresponding processes. For example, for computing the weights of , we use\nwhere is the estimate of .\nFrom the particles and their corresponding weights, we then compute .\nWe continue in the same vein by estimating one process value at a time until we complete these steps with estimating the value of the root process.\nBefore we proceed to process the next observation, we resample the streams using their respective weights . One approach to computing these weights ise based on the following expression:\nFor the unknown in this equation, we could use their respective MMSE estimates . Another approach is based on approximating the factors in (56 ###reference_###) with the average likelihood, that is, with\nwhere are the weights associated with the particles .\nBy combining equation (56 ###reference_###) and (57 ###reference_###), we obtain"
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "6",
|
| 115 |
+
"parent_section_id": null,
|
| 116 |
+
"section_name": "VI Experiments",
|
| 117 |
+
"text": "We tested the performance of the proposed method with several experiments. In all the experiments, we applied the ensemble method with members. Specifically, the elements of the sets were randomly sampled from the power spectral density of RBF kernels with prior length scale vectors and prior variances , where the elements of were independently sampled from the discrete set . Note that represents the hyperparameters and includes the length scale vector and the prior variances."
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "6.1",
|
| 121 |
+
"parent_section_id": "6",
|
| 122 |
+
"section_name": "VI-A A test with and",
|
| 123 |
+
"text": "###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### In the first experiment, we tested the inference of a GP-SSM when . More specifically, we generated data from an SSM with and according to the following model:\nThe generated data set contained 2,000 samples with , and for initializing the estimation, we used samples. For drawing random vectors needed in the construction of the random features in (II-B ###reference_###), we used .\nFor comparison purposes, all the signals were normalized from to . We emphasize again that the functions in the state and observation equations are unknown.\nFigure 4 ###reference_### shows the last samples of the actual latent process in red line, its estimate in grey line, and the 95 confidence interval depicted by the blue region. Figure 4 ###reference_### exhibits the estimates of the latent states under 20 runs with different seeds.\nThe results suggest that with our approach we can provide accurate estimates of the latent states and thus can capture their dynamics. Further, we can quantify the uncertainties of the estimates.\nRecall from Section IV-A ###reference_### that we use the SVD to standardize both the actual and estimated states.\nFigure 6 ###reference_### illustrates the true pairs and estimated pairs before applying SVD. The scaled actual and estimated states after SVD are shown in Fig. 6 ###reference_###.\nTo assess the performance of our method further,\nin Fig. 8 ###reference_### we present the results of the first 100 samples, demonstrating the consistent performance of our method. It is important to note that the samples from to the end have been standardized simultaneously, ensuring a consistent standardization approach across the entire test set. Thus, the first and last 100 samples have not been separately standardized. In addition, in Fig. 8 ###reference_### we show the root mean square errors (RMSEs) of the estimated latent states.\nHere we provide motivation and insights for using Student\u2019s t-distributions rather than Gaussian ones. In our previous work, [43 ###reference_b43###], we considered the variances of the Gaussians as vectors that are optimized by gradient descent algorithms. However, bad initial values of variances would incur huge bias because of the risk of not converging. Therefore, we used Student\u2019s t-distributions to account for the uncertainty of the variances . Further, the Student\u2019s t-distribution allows for a closed-form formulation of the variance updates. To make a comparison between the models with Student\u2019s t and Gaussian distributions, we conducted the following experiment. We assumed that the prior information provides initial values of Gaussian variances with 0.1, while the actual variances were . The model with Gaussians had a much worse performance in accuracy and had increased computing time compared to the model with Student\u2019s t distributions. The results are shown in Figs. 10 ###reference_### and 10 ###reference_###. The red lines show the RMSEs and MNLLs under the model with Gaussians, whereas the grey lines represent our proposed model with Student\u2019s t distributions. We reiterate that the model with Gaussians requires more time to run for the same number of samples.\n###figure_7### ###figure_8###"
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "6.2",
|
| 127 |
+
"parent_section_id": "6",
|
| 128 |
+
"section_name": "VI-B A test with and",
|
| 129 |
+
"text": "In the next experiment, we tested the GP-SSM when . We wanted to demonstrate the ability of our model to learn lower dimensional processes from high dimensional observation signals. Our generative model had and and was of the form\nwhere , ,\n and . The elements of and were randomly generated from to , and the entries of and were also randomly drawn from to . The hyperparameters were set to be the same as in the above section. Figure 11 ###reference_### shows\n and of the last samples.\nThe results indicate that even for the signals with high frequency and dimensions, our model can adjust quickly.\n###figure_9###"
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "6.3",
|
| 133 |
+
"parent_section_id": "6",
|
| 134 |
+
"section_name": "VI-C The need for a deep model",
|
| 135 |
+
"text": "Do we need GP-DSSMs? This experiment shows that the answer is positive, especially when the selected kernel for the GP may not have enough capacity to learn.\nWe validated this by an experiment where and .\nWe generated the observation process as a GP whose kernel was a superposition of a dot-product and a Mat\u00e9rn kernel.\nThe dot-product kernel was a non-stationary kernel with a hyperparameter ,\nand the Mat\u00e9rn kernel was set with hyperparameters and length scales .\nThe outputs were normalized before being used by our model. In mathematical terms, the transition and observation processes were obtained by\nwhere , and actually denotes to simplify the notation. The function\n is the GP with a dot-product kernel\nadding a Mat\u00e9rn kernel, and is a sine function when is odd while a cosine when is even.\nThe noises and had the same variances .\nThe remaining parameters were and .\nWe implemented four models, from one-hidden layer to four-hidden layers. Figure 12 ###reference_### shows the RMSEs of the four models. The model with two-hidden layers achieves the minimum RMSEs. Note that the behavior of the RMSEs relative to the number of hidden layers is similar to a concave function, consistent with the conclusion from [44 ###reference_b44###], i.e., that the RMSEs decrease and then increase with the number of hidden layers increasing. From the conclusion in [45 ###reference_b45###], we might expect that deeper models would be better when we increase the number of parameters such as and .\n###figure_10###"
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"section_id": "6.4",
|
| 139 |
+
"parent_section_id": "6",
|
| 140 |
+
"section_name": "VI-D Testing the performance of GP-DSSM",
|
| 141 |
+
"text": "###figure_11### We generated data from a DSS model with two hidden layers, with , , and . The model is given by the following two layers of latent processes:\nand four observation processes given by\nThis model is identical to the one from Sec. V-B of [43 ###reference_b43###], except that we changed the first equation in [43 ###reference_b43###]\nto\nWe made this change to force the latent process to become less smooth, thus making it harder for estimation. The results are shown in Fig. 13 ###reference_###. Evidently, they show that the proposed method is capable of accurately estimating all the latent processes even though the latent processes are much more jagged. Further, compared with the results in [43 ###reference_b43###] which had persistent lags in the estimated processes, the results from the method presented here showed almost no lags."
|
| 142 |
+
},
|
| 143 |
+
{
|
| 144 |
+
"section_id": "6.5",
|
| 145 |
+
"parent_section_id": "6",
|
| 146 |
+
"section_name": "VI-E Testing on real-world data sets",
|
| 147 |
+
"text": "We also assessed the performance of our model on five real-world data sets [46 ###reference_b46###] and compared it to six state-of-the-art models. The suite of reference methods is composed of (1) two one-step ahead autoregressive GP models: GP-NARX [13 ###reference_b13###] and NIGP [47 ###reference_b47###], (2) three multi-step-ahead autoregressive and recurrent GP\nmodels in latent spaces: REVARB with one and two hidden layers [48 ###reference_b48###] and MSGP [49 ###reference_b49###], and (3) two GP-SSMs, based on a full Markovian state:\nSS-GP-SSM [50 ###reference_b50###] and PR-SSM [46 ###reference_b46###]. Specifically, the REVARB is a deep state-space model based on RNNs. Note that these benchmark methods are learned in an offline mode or in a batch mode, which means that they are trained on sets multiple times and then applied to test sets.\nThe settings of all the methods were the same, where the number of inducing points or the number of random features was 20. The latent dimension was set to four. All the observations were normalized. The results of the best performer in terms of the Welch t-test are presented in bold numbers. Further information about our model including run time is provided in Table II ###reference_###. The training times were collected from a server with 12G RAM. The training time of our ensemble model depends on the number of candidate models, which was in our experiments, and these models were trained separately. If they are deployed in a distributed manner, the training time in Table II ###reference_### can be reduced by around 100 times. The benchmark methods, however, can only be implemented by multiple iterations and cannot be conducted in a distributed way. The training and test times of the other methods are not provided in the corresponding papers and are affected by the used number of iterations in the computations.\nAs benchmarks, we have chosen to use batch methods due to the lack of other sequential methods that operate under the assumption of unknown transition and observation functions, as described in our paper. In order to ensure a fair comparison, we utilized a test set with a small number of samples, specifically ranging from 100 to 500 samples. Opting for a smaller test set allows for a more gradual and less rapid change in the sequential method. The batch methods undergo multiple training iterations on the training set to ensure convergence of their parameters. By contrast, our method only exploited the training set once and thus, may have not achieved complete convergence by the end of the training process due to the small number of training samples. Even so, our method, RF-SSM, achieved the best performance on two data sets (Actuator and Furnace). It also was the second best performer on the data set Drive and the third best performer on the data set Ballbeam. All the results are shown in Table I ###reference_###. The numbers in parentheses are the standard deviations of the RMSEs among five runs with different seeds."
|
| 148 |
+
},
|
| 149 |
+
{
|
| 150 |
+
"section_id": "7",
|
| 151 |
+
"parent_section_id": null,
|
| 152 |
+
"section_name": "VII Conclusions",
|
| 153 |
+
"text": "In this paper, we addressed the problem of sequential estimation of state-space models and deep state-space models using Gaussian process-based state-space modeling and Gaussian process-based deep state-space modeling. An important advantage of the considered methodology is that it relaxes the assumption of knowing the functions in the observation and state equations. We implemented the Gaussian processes by using random feature-based Gaussian processes. The inference method is based on the combination of particle filtering and Bayesian linear regression. We also proposed an ensemble of filters for tracking the latent processes. With several experiments, we demonstrated the performance of the proposed method in different settings including synthetic examples and five real data sets. Further, we compared the performance of our method with 6 other state-of-the-art methods. A limitation of our approach is that for too deep models, the method requires many more particles and a larger number of features. In future work, we will address the use of the variational Bayes approach to acquire a better set of random feature candidates so that we can reduce the number of features and particles. We will also consider applying more efficient sampling approaches."
|
| 154 |
+
}
|
| 155 |
+
],
|
| 156 |
+
"appendix": [],
|
| 157 |
+
"tables": {
|
| 158 |
+
"1": {
|
| 159 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>RMSEs and standard deviations</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S6.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S6.T1.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S6.T1.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S6.T1.1.1.1.2\">ONE-STEP-AHEAD,</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S6.T1.1.1.1.3\">MULTI-STEP-AHEAD, LATENT</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S6.T1.1.1.1.4\">MARKOVIAN STATE-SPACE</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.1.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S6.T1.1.2.2.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" colspan=\"2\" id=\"S6.T1.1.2.2.2\">AUTOREGRESSIVE</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" colspan=\"3\" id=\"S6.T1.1.2.2.3\">SPACE AUTOREGRESSIVE</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" colspan=\"3\" id=\"S6.T1.1.2.2.4\">MODELS</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.1.3.3\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_th_row\" id=\"S6.T1.1.3.3.1\">TASK</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T1.1.3.3.2\">GP-NARX</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T1.1.3.3.3\">NIGP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T1.1.3.3.4\">REVARB 1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T1.1.3.3.5\">REVARB 2</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T1.1.3.3.6\">MSGP</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T1.1.3.3.7\">SS-GP-SSM</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T1.1.3.3.8\">PR-SSM</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S6.T1.1.3.3.9\">RF-SSM</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T1.1.4.1\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S6.T1.1.4.1.1\">ACTUATOR</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T1.1.4.1.2\">0.627</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T1.1.4.1.3\">0.599</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T1.1.4.1.4\">0.438</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T1.1.4.1.5\">0.613</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T1.1.4.1.6\">0.771</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T1.1.4.1.7\">0.696</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T1.1.4.1.8\">0.502</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S6.T1.1.4.1.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T1.1.4.1.9.1\">0.295</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.1.5.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S6.T1.1.5.2.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.5.2.2\">(0.005)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.5.2.3\">(0)</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S6.T1.1.5.2.4\">(0.049)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.5.2.5\">(0.190)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.5.2.6\">(0.098)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.5.2.7\">(0.034)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.5.2.8\">(0.031)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.5.2.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T1.1.5.2.9.1\">(0.037)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.1.6.3\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S6.T1.1.6.3.1\">BALLBEAM</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.6.3.2\">0.284</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.6.3.3\">0.087</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.6.3.4\">0.139</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.6.3.5\">0.209</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.6.3.6\">0.124</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.6.3.7\">411.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.6.3.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T1.1.6.3.8.1\">0.073</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.6.3.9\">0.107</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.1.7.4\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S6.T1.1.7.4.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.7.4.2\">(0.222)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.7.4.3\">(0)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.7.4.4\">(0.007)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.7.4.5\">(0.012)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.7.4.6\">(0.034)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.7.4.7\">(273.0)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.7.4.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T1.1.7.4.8.1\">(0.007)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.7.4.9\">(0.010)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.1.8.5\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S6.T1.1.8.5.1\">DRIVE</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.8.5.2\">0.701</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.8.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T1.1.8.5.3.1\">0.373</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.8.5.4\">0.828</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.8.5.5\">0.868</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.8.5.6\">0.451</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.8.5.7\">0.718</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.8.5.8\">0.492</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.8.5.9\">0.417</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.1.9.6\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S6.T1.1.9.6.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.9.6.2\">(0.015)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.9.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T1.1.9.6.3.1\">(0)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.9.6.4\">(0.025)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.9.6.5\">(0.113)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.9.6.6\">(0.021)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.9.6.7\">(0.009)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.9.6.8\">(0.038)</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S6.T1.1.9.6.9\">(0.030)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.1.10.7\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S6.T1.1.10.7.1\">FURNACE</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.10.7.2\">1.201</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.10.7.3\">1.205</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.10.7.4\">1.195</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.10.7.5\">1.188</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.10.7.6\">1.277</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.10.7.7\">1.318</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.10.7.8\">1.249</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.10.7.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T1.1.10.7.9.1\">0.410</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.1.11.8\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S6.T1.1.11.8.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.11.8.2\">(0.000)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.11.8.3\">(0)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.11.8.4\">(0.002)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.11.8.5\">(0.001)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.11.8.6\">(0.127)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.11.8.7\">(0.027)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.11.8.8\">(0.029)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.11.8.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T1.1.11.8.9.1\">(0.032)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.1.12.9\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S6.T1.1.12.9.1\">DRYER</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.12.9.2\">0.310</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.12.9.3\">0.268</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.12.9.4\">0.851</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.12.9.5\">0.355</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.12.9.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T1.1.12.9.6.1\">0.146</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.12.9.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T1.1.12.9.7.1\">0.152</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.12.9.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T1.1.12.9.8.1\">0.140</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T1.1.12.9.9\">0.273</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.1.13.10\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_bb\" id=\"S6.T1.1.13.10.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S6.T1.1.13.10.2\">(0.044)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S6.T1.1.13.10.3\">(0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S6.T1.1.13.10.4\">(0.011)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S6.T1.1.13.10.5\">(0.027)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S6.T1.1.13.10.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T1.1.13.10.6.1\">(0.004)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S6.T1.1.13.10.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T1.1.13.10.7.1\">(0.006)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S6.T1.1.13.10.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T1.1.13.10.8.1\">(0.018)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" 
id=\"S6.T1.1.13.10.9\">(0.021)</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 160 |
+
"capture": "TABLE I: RMSEs and standard deviations"
|
| 161 |
+
},
|
| 162 |
+
"2": {
|
| 163 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Data Information</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S6.T2.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S6.T2.4.4\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T2.4.4.5\"><span class=\"ltx_text ltx_font_italic\" id=\"S6.T2.4.4.5.1\">Task</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T2.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T2.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T2.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S6.T2.4.4.4\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T2.4.5.1\">\n<td class=\"ltx_td ltx_align_right\" id=\"S6.T2.4.5.1.1\">ACTUATOR</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.5.1.2\">512</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.5.1.3\">512</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.5.1.4\">701.0 (3.847)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.5.1.5\">13.8 (0.748)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.6.2\">\n<td class=\"ltx_td ltx_align_right\" id=\"S6.T2.4.6.2.1\">BALLBEAM</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.6.2.2\">500</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.6.2.3\">500</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.6.2.4\">689.6 (15.318)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.6.2.5\">20.6 (1.744)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.7.3\">\n<td class=\"ltx_td ltx_align_right\" id=\"S6.T2.4.7.3.1\">DRIVE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.7.3.2\">250</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.7.3.3\">250</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.7.3.4\">307.6 (3.2)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.7.3.5\">4.2 (0.400)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.8.4\">\n<td class=\"ltx_td ltx_align_right\" id=\"S6.T2.4.8.4.1\">FURNACE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.8.4.2\">148</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.8.4.3\">148</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.8.4.4\">160.4 (0.490)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.8.4.5\">3.8 (0.400)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.4.9.5\">\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S6.T2.4.9.5.1\">DRYER</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S6.T2.4.9.5.2\">500</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S6.T2.4.9.5.3\">500</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S6.T2.4.9.5.4\">686.0 (6.841)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S6.T2.4.9.5.5\">8.0 (0.00)</td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 164 |
+
"capture": "TABLE II: Data Information"
|
| 165 |
+
}
|
| 166 |
+
},
|
| 167 |
+
"image_paths": {
|
| 168 |
+
"3(a)": {
|
| 169 |
+
"figure_path": "2301.12528v2_figure_3(a).png",
|
| 170 |
+
"caption": "Figure 3: Point estimates and 95% confidence region of x^t[1]superscriptsubscript^\ud835\udc65\ud835\udc61delimited-[]1\\widehat{x}_{t}^{[1]}over^ start_ARG italic_x end_ARG start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT [ 1 ] end_POSTSUPERSCRIPT and its true values xt[1]superscriptsubscript\ud835\udc65\ud835\udc61delimited-[]1x_{t}^{[1]}italic_x start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT [ 1 ] end_POSTSUPERSCRIPT for the last 100 samples.",
|
| 171 |
+
"url": "http://arxiv.org/html/2301.12528v2/extracted/5491216/AQ/ci.png"
|
| 172 |
+
},
|
| 173 |
+
"3(b)": {
|
| 174 |
+
"figure_path": "2301.12528v2_figure_3(b).png",
|
| 175 |
+
"caption": "Figure 3: Point estimates and 95% confidence region of x^t[1]superscriptsubscript^\ud835\udc65\ud835\udc61delimited-[]1\\widehat{x}_{t}^{[1]}over^ start_ARG italic_x end_ARG start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT [ 1 ] end_POSTSUPERSCRIPT and its true values xt[1]superscriptsubscript\ud835\udc65\ud835\udc61delimited-[]1x_{t}^{[1]}italic_x start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT [ 1 ] end_POSTSUPERSCRIPT for the last 100 samples.",
|
| 176 |
+
"url": "http://arxiv.org/html/2301.12528v2/"
|
| 177 |
+
},
|
| 178 |
+
"4(a)": {
|
| 179 |
+
"figure_path": "2301.12528v2_figure_4(a).png",
|
| 180 |
+
"caption": "Figure 5: Pairs of estimated (\ud835\udc31^tsubscript^\ud835\udc31\ud835\udc61\\widehat{\\textbf{x}}_{t}over^ start_ARG x end_ARG start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT) and actual (\ud835\udc31tsubscript\ud835\udc31\ud835\udc61\\textbf{x}_{t}x start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT) latent states before SVD.",
|
| 181 |
+
"url": "http://arxiv.org/html/2301.12528v2/"
|
| 182 |
+
},
|
| 183 |
+
"4(b)": {
|
| 184 |
+
"figure_path": "2301.12528v2_figure_4(b).png",
|
| 185 |
+
"caption": "Figure 5: Pairs of estimated (\ud835\udc31^tsubscript^\ud835\udc31\ud835\udc61\\widehat{\\textbf{x}}_{t}over^ start_ARG x end_ARG start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT) and actual (\ud835\udc31tsubscript\ud835\udc31\ud835\udc61\\textbf{x}_{t}x start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT) latent states before SVD.",
|
| 186 |
+
"url": "http://arxiv.org/html/2301.12528v2/"
|
| 187 |
+
},
|
| 188 |
+
"5(a)": {
|
| 189 |
+
"figure_path": "2301.12528v2_figure_5(a).png",
|
| 190 |
+
"caption": "Figure 7: Point estimates of x^t[1]superscriptsubscript^\ud835\udc65\ud835\udc61delimited-[]1\\widehat{x}_{t}^{[1]}over^ start_ARG italic_x end_ARG start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT [ 1 ] end_POSTSUPERSCRIPT with 95% confidence region and the true xt[1]superscriptsubscript\ud835\udc65\ud835\udc61delimited-[]1x_{t}^{[1]}italic_x start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT [ 1 ] end_POSTSUPERSCRIPT from T0subscript\ud835\udc470T_{0}italic_T start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT to T0+100subscript\ud835\udc470100T_{0}+100italic_T start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT + 100.",
|
| 191 |
+
"url": "http://arxiv.org/html/2301.12528v2/extracted/5491216/AQ/result1_x1_100.png"
|
| 192 |
+
},
|
| 193 |
+
"5(b)": {
|
| 194 |
+
"figure_path": "2301.12528v2_figure_5(b).png",
|
| 195 |
+
"caption": "Figure 7: Point estimates of x^t[1]superscriptsubscript^\ud835\udc65\ud835\udc61delimited-[]1\\widehat{x}_{t}^{[1]}over^ start_ARG italic_x end_ARG start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT [ 1 ] end_POSTSUPERSCRIPT with 95% confidence region and the true xt[1]superscriptsubscript\ud835\udc65\ud835\udc61delimited-[]1x_{t}^{[1]}italic_x start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT start_POSTSUPERSCRIPT [ 1 ] end_POSTSUPERSCRIPT from T0subscript\ud835\udc470T_{0}italic_T start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT to T0+100subscript\ud835\udc470100T_{0}+100italic_T start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT + 100.",
|
| 196 |
+
"url": "http://arxiv.org/html/2301.12528v2/"
|
| 197 |
+
},
|
| 198 |
+
"6(a)": {
|
| 199 |
+
"figure_path": "2301.12528v2_figure_6(a).png",
|
| 200 |
+
"caption": "Figure 9: RMSEs after T0subscript\ud835\udc470T_{0}italic_T start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT. The x-axis represents time (seconds).",
|
| 201 |
+
"url": "http://arxiv.org/html/2301.12528v2/"
|
| 202 |
+
},
|
| 203 |
+
"6(b)": {
|
| 204 |
+
"figure_path": "2301.12528v2_figure_6(b).png",
|
| 205 |
+
"caption": "Figure 9: RMSEs after T0subscript\ud835\udc470T_{0}italic_T start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT. The x-axis represents time (seconds).",
|
| 206 |
+
"url": "http://arxiv.org/html/2301.12528v2/"
|
| 207 |
+
},
|
| 208 |
+
"7": {
|
| 209 |
+
"figure_path": "2301.12528v2_figure_7.png",
|
| 210 |
+
"caption": "Figure 11: Estimated latent processes \ud835\udc31^tsubscript^\ud835\udc31\ud835\udc61\\widehat{\\textbf{x}}_{t}over^ start_ARG x end_ARG start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT and actual latent processes \ud835\udc31tsubscript\ud835\udc31\ud835\udc61\\textbf{x}_{t}x start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT for all five dimensions.",
|
| 211 |
+
"url": "http://arxiv.org/html/2301.12528v2/"
|
| 212 |
+
},
|
| 213 |
+
"8": {
|
| 214 |
+
"figure_path": "2301.12528v2_figure_8.png",
|
| 215 |
+
"caption": "Figure 12: RMSEs obtained from a single-hidden-layer GP-SSM to a four-hidden-layer GP-DSSM.",
|
| 216 |
+
"url": "http://arxiv.org/html/2301.12528v2/"
|
| 217 |
+
},
|
| 218 |
+
"9": {
|
| 219 |
+
"figure_path": "2301.12528v2_figure_9.png",
|
| 220 |
+
"caption": "Figure 13: On the left are the true values and estimates of \ud835\udc311,tsubscript\ud835\udc311\ud835\udc61\\textbf{x}_{1,t}x start_POSTSUBSCRIPT 1 , italic_t end_POSTSUBSCRIPT, and on the right, the true values and estimates of \ud835\udc312,tsubscript\ud835\udc312\ud835\udc61\\textbf{x}_{2,t}x start_POSTSUBSCRIPT 2 , italic_t end_POSTSUBSCRIPT.",
|
| 221 |
+
"url": "http://arxiv.org/html/2301.12528v2/"
|
| 222 |
+
}
|
| 223 |
+
},
|
| 224 |
+
"validation": true,
|
| 225 |
+
"references": [],
|
| 226 |
+
"url": "http://arxiv.org/html/2301.12528v2"
|
| 227 |
+
}
|
20240323/2302.10681v4.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2303.01656v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2304.00263v2.json
ADDED
|
@@ -0,0 +1,121 @@
| 1 |
+
{
|
| 2 |
+
"title": "On the impact of regularization in data-driven predictive control",
|
| 3 |
+
"abstract": "Model predictive control (MPC) is a control strategy widely used in industrial applications. However, its implementation typically requires a mathematical model of the system being controlled, which can be a time-consuming and expensive task. Data-driven predictive control (DDPC) methods offer an alternative approach that does not require an explicit mathematical model, but instead optimize the control policy directly from data. In this paper, we study the impact of two different regularization penalties on the closed-loop performance of a recently introduced data-driven method called -DDPC. Moreover, we discuss the tuning of the related coefficients in different data and noise scenarios, to provide some guidelines for the end user.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Model predictive control (MPC) is a popular control strategy that has been successfully applied in a wide range of applications [1 ###reference_b1###]. However, a major limitation of MPC is that it requires a mathematical model of the system being controlled, which can be a costly and time-consuming task. This requirement has led to the development of data-driven predictive control (DDPC) methods, which aim to learn the control policy directly from data without the need for a mathematical model of the plant [2 ###reference_b2###] [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###].\nNonetheless, the data-based predictor used in DDPC is not exempt from shortcomings, due to the presence of noise on the measured data. Therefore, different techniques have been proposed to make the closed-loop performance less sensitive to such a noise, e.g., robust design in case hard power bounds are given [3 ###reference_b3###], dynamic mode decomposition [6 ###reference_b6###] and regularization [7 ###reference_b7###]. The latter in particular can be used to prevent the data-based predictor to overfit the historical data, by tuning a few penalty coefficients. In the pioneering work [7 ###reference_b7###], the design of such terms is discussed for different kinds of regularization, and the authors highlight the significant efforts required in terms of trial-and-error tuning, especially as far as some specific parameters are concerned. In [8 ###reference_b8###], we showed that regularization may be avoided in case the data set is large enough and the DDPC problem is reformulated thanks to subspace identification tools, so as to shrink the number of decision variables, into the so-called -DDPC method. Finally, in [9 ###reference_b9###], we have focused on finite size data sets and used asymptotic arguments to show that regularization might instead be useful to counteract the prediction error variance, due to the use of noisy data in the predictor. Two different regularization options have been introduced, and an on-line tuning of the associated penalizations has been proposed, based on the prior knowledge of the variance expression.\nThis paper\u2019s contribution is built upon [9 ###reference_b9###], since our goal here is to analyze the joint tuning of the two regularization terms of -DDPC and analyze their impact on the closed-loop performance. In particular, we shall discuss the role of the driving input color (spectra) and some qualitative guidelines about regularization design will be drawn by means of extensive simulations on a benchmark linear system as well as on a challenging nonlinear problem, namely, wheel slip control in braking maneuvering. Finally, offline and on-line regularization tuning will be compared.\nThe remainder of the paper is as follows. In Section II ###reference_###, the predictive control problem setting is described, and the regularization tuning issue is mathematically formulated. Section III ###reference_### illustrates the considered regularization techniques for -DDPC and discusses the role of each term, also by means of two numerical case studies. The paper is ended by some concluding remarks.\nNotation. Given a signal (say ), the associated (block) Hankel matrix is defined as:\nwhile we use the shorthand to denote a single (block) row Hankel, namely:"
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Problem setting",
|
| 15 |
+
"text": "Our goal is to design a controller for an\nunknown plant that can be modeled by linear time-invariant (LTI) discrete-time linear (stochastic) system . Without loss of generality, we consider its state space description in\ninnovation form\nwhere , and are the state, input and innovation process respectively, while is the corresponding output signal.\nUnder the unrealistic assumption that the system matrices are known, the predictive constrained tracking control problem of interest for this paper (for a given reference and a prediction horizon ) can be formulated as follows\nwhere the penalties and , with and , are selected to trade-off between tracking performance and control effort.\nA standard assumption in data-driven control is that the system matrices are not known, and only a finite sequence of input/output data . We would like to stress that in our framework measured data are by assumption noisy, in the sense that there is no LTI system that, with the given input , produces exactly the measured output in a deterministic way.\nIn this paper, we follow that data-driven predictive control problem formulation provided in [10 ###reference_b10###, 9 ###reference_b9###], and we refer to those papers for a connection with the recent related literature such as [7 ###reference_b7###, 3 ###reference_b3###].\nTo this purpose, we need to introduce the Hankel matrices, including past and future values of inputs and outputs, with respect to time . In particular, with obvious use of the subscripts and , we define:\nwhere and is the \u201cpast horizon\u201d.\nBased on (3 ###reference_###) the Hankel can be written as\nLet us now define as the joint input/output process\nwith the associated Hankel matrix being . The orthogonal projection of onto the row space of and turns out to be given by\nwhere the last term vanishes111For a more formal statement on this, we refer the reader to standard literature on subspace identification. (in probability) as .\nThis means that, when the matrices are unknown, future outputs can still be predicted directly from data. In fact, given any (past) joint input and output trajectory and future control inputs\nthe prediction of future outputs\nbased on past inputs and future inputs \ncan be obtained from222Conditions on for this to hold are provided in [10 ###reference_b10###].\nwith to be optimized as in, e.g., [3 ###reference_b3###], [10 ###reference_b10###], [11 ###reference_b11###].\nFollowing subspace identification [12 ###reference_b12###] ideas, the orthogonal projection (8 ###reference_###) can be written exploiting the LQ decomposition of the data matrices. In particular, let us define\nwhere the matrices are all non-singular and have orthonormal rows, i.e., , for , , . The orthogonal projection\n(8 ###reference_###) can be written in the form:\nWith this notation, following the same rationale of [10 ###reference_b10###, 9 ###reference_b9###], we can further reformulate (11 ###reference_###) as:\nand the parameters\nbecome the new decision variables. In addition, in [9 ###reference_b9###] it was suggested to add a (slack) optimization variable to model the projection error in (8 ###reference_###) and avoid overfitting. In particular, the prediction (with slack) can be written as:\nWe refer the reader to [9 ###reference_b9###] for a sound statistical motivation of this particular expression of the slack . 
In particular, since is generically of full rank, constraints/regularization should be imposed on the slack optimization variable .\nA data-driven predictive controller with the same objectives and constraints of (II ###reference_###) can be formulated as follows [10 ###reference_b10###]\nwith\nand\nwhere is defined as in (9 ###reference_###) and the choice of straightforwardly follows from the initial conditions (showing the advantages of using instead of as the decision vector).\nThe purpose of this paper is to study the design and impact of the regularization term within a noisy stochastic environment, and provide the end user with useful hints on how to tune such a penalty term."
|
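As a rough, self-contained sketch of the data structures introduced above, the Python snippet below builds past/future Hankel matrices from synthetic SISO data and obtains an LQ factorization via a QR decomposition of the transpose; the signal lengths, horizons, and synthetic signals are illustrative assumptions rather than the paper's actual setup.

import numpy as np
from scipy.linalg import qr

def hankel(signal, rows, cols):
    # Entry (i, j) equals signal[i + j], i.e., a Hankel matrix built from a 1-D signal.
    return np.stack([signal[i:i + cols] for i in range(rows)], axis=0)

rng = np.random.default_rng(0)
u = rng.standard_normal(600)   # synthetic input sequence (placeholder)
y = rng.standard_normal(600)   # synthetic output sequence (placeholder)

rho, T, N = 10, 20, 500        # past horizon, prediction horizon, number of columns
Up, Yp = hankel(u, rho, N), hankel(y, rho, N)          # past data
Uf, Yf = hankel(u[rho:], T, N), hankel(y[rho:], T, N)  # future data
Zp = np.vstack([Up, Yp])                               # joint past

# LQ factorization of the stacked data matrix: A = L Q with Q having orthonormal rows,
# computed from the (economic) QR factorization of A's transpose.
A = np.vstack([Zp, Uf, Yf])
Q_t, L_t = qr(A.T, mode="economic")
L, Q = L_t.T, Q_t.T            # A == L @ Q up to numerical precision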
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III The role of regularization",
|
| 21 |
+
"text": "In [9 ###reference_b9###], it has been argued that the average variance of the error on the future output predictions due to the finite data projection errors in (8 ###reference_###), is proportional to . Since, in the optimization problem (II ###reference_###), is determined by the initial conditions, it only remains to regularize so as to avoid an (unnecessarily) high variance on the predictor and, therefore, poor control performance. In this paper, we consider also an alternative regularization term that penalizes directly the control input effort (in addition to the control penalty already embedded in the control cost), and discuss its relation with regularization on . Differently from [9 ###reference_b9###], we consider this jointly with presence of a slack variable and thus a related regularization. These considerations lead to the following two forms of the regularization term in\n(II ###reference_###):\nRegularization on and slack\nRegularization on input and slack\nwhere are hyper-parameters to be determined."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "III-A Theoretical analysis",
|
| 27 |
+
"text": "We first state a Theorem the establishes the connection between (18 ###reference_###) and ((b) ###reference_###).\nIf the training input sequence in the Hankel matrices and is (zero mean) white with variance , the regularization terms in (18 ###reference_###) and in ((b) ###reference_###) are asymptotically (in ) equivalent up to a rescaling of the weight .\nUnder the assumption that is white noise, then the future inputs are uncorrelated with past input and output data, so that the projection of on the joint past tends to zero as , more precisely\nSince , it follows that . In addition, since is white, its sample covariance matrix converges to , i.e.\nEquations (20 ###reference_###) and (21 ###reference_###) imply that, asymptotically in , and . Therefore we have:\nshowing that, up to the rescaling of the weight , this is equivalent to \n\u220e\nThis result has two important implications:\nWhen the (training) input is white, regularization on is equivalent to a penalty on the future input energy, which is typically present in the control cost. As such, we can argue that, in this case, the control cost has an indirect but important effect in counteracting the effect of the noise variance in the predictor.\nWhen the training input is not white, the control energy cost is not equivalent to penalizing the norm of , which on the other hand should be penalized to limit the effect of noise variance. The simulation results in the next section indeed confirm that, when noise input is not white, regularization on (i.e. ) has to be included."
|
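A small numerical check of the white-input argument above (illustrative sizes and signals, not taken from the paper): for a white training input, the sample covariance of the future-input block approaches a scaled identity, and its projection onto the joint past becomes negligible as the number of columns grows.

import numpy as np

rng = np.random.default_rng(1)
sigma_u = 0.5
rho, T, N = 5, 10, 20000          # past horizon, future horizon, number of columns

u = sigma_u * rng.standard_normal(rho + T + N)
y = rng.standard_normal(rho + T + N)   # stand-in output, irrelevant for this check

H = lambda s, rows: np.stack([s[i:i + N] for i in range(rows)], axis=0)
Zp = np.vstack([H(u, rho), H(y, rho)])  # joint past
Uf = H(u[rho:], T)                      # future inputs

# Sample covariance of the future inputs: should approach sigma_u^2 * I for white input.
cov_uf = Uf @ Uf.T / N
print(np.allclose(cov_uf, sigma_u**2 * np.eye(T), atol=2e-2))

# Projection of Uf onto the row space of Zp: small relative to Uf when the input is white.
proj = Uf @ Zp.T @ np.linalg.solve(Zp @ Zp.T, Zp)
print(np.linalg.norm(proj) / np.linalg.norm(Uf))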
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.2",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "III-B Experimental analysis",
|
| 33 |
+
"text": "In this section we shall illustrate, exploiting two numerical examples (one linear and one nonlinear333By working in a specific operating regime, the control of an unknown nonlinear system can be tackled as that of an uncertain linear system.), the role of different regularization terms in the optimal control problem (II ###reference_###). In particular, following the rationale proposed in [7 ###reference_b7###], we evaluate the closed-loop performance over feedback steps as measured by the performance index:\n###figure_1### ###figure_2### ###figure_3###"
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.2.1",
|
| 37 |
+
"parent_section_id": "3.2",
|
| 38 |
+
"section_name": "III-B1 Benchmark LTI system",
|
| 39 |
+
"text": "Consider the SISO, -th order system in [13 ###reference_b13###] ( in the sequel) with a prediction horizon . To assess the impact of the training data on closed-loop performance, we consider four data sets of two different lengths (either or ), obtained either with white noise input (denoted with ) or with a low-pass (obtained filtering white noise with a discrete-time low-pass filter with cut-off angular frequency ) input sequence (denoted with ).\nWhite noise is added to the output to guarantee a signal-to-noise ratio of dB.\nThe data-driven optimal control problem (II ###reference_###) is solved\nfixing and ,\nsetting the output reference and the input reference , with . The two different regularization strategies discussed in Section III ###reference_### are denoted with the shorthands and for (18 ###reference_###) and ((b) ###reference_###), respectively.\nThe \u201coracle\u201d value of leading to the minimum cost (23 ###reference_###), here denoted as to account for the different data set lengths, input and regularization strategies, are searched over a rectangular logarithmic-spaced grid with points per decade, so that .\nWe perform Monte Carlo experiments444The past horizon is determined using Akaike\u2019s information criterion. (i.e. different training data sets with the output of corrupted by white noise) to tune and \nfor all the four considered training scenarios and possible regularization. For each Monte Carlo run, and for each set of possible parameters in the grid , the closed loop performance index (23 ###reference_###) is computed by averaging over closed loop experiments (all with the same control law but different closed measurements errors) the corresponding performance index , i.e.\nThe optimal values of and over the grid is obtained by finding the minimum .\nThe results are reported in Fig. 1 ###reference_###, based on which we can make the following general considerations:\nAs expected based on Theorem 1 ###reference_1###, when the input is white noise, the two types of regularization provide the same performance (minor differences for are due to sample variability).\nThe \u201coptimal\u201d (oracle) closed loop performance obtained with the two different regularization strategies differ when the input is not white. In particular, the penalty (18 ###reference_###) that acts directly on and thus controls the predictor variance provides the best performance, particularly so for small data sets where the effect of noise has more impact.\nBased on the comparison between Fig.1 ###reference_###(a) and Fig. 1 ###reference_###(b) (in which is constrained to zero), we can observe that the impact of is significant in the low-data regime (equivalent to large noise in the predictor), whereas for larger data sets its impact can be neglected and the optimal performances exploiting only match those obtained optimizing jointly and .\nThe location of the minimum points (see also Fig. 1(c) ###reference_sf3###) is different depending on the employed training signal, especially in the low data regime; indeed, preference is given to exploiting the regularization term on (and in fact in most cases).\nIn light of the above observations, the general validity of (II ###reference_###) constrained to either or devised in [9 ###reference_b9###] is strengthened, as different types of data set are given. 
The effectiveness of these -DDPC schemes is also reinforced since it is evident that, in most cases, the operative tuning of either the sole parameter or the sole parameter is worth to be carried out in practice.\n###figure_4### ###figure_5### ###figure_6###"
|
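A schematic of the offline tuning procedure described above: both penalty coefficients are swept over a logarithmically spaced grid and scored by the closed-loop cost averaged over Monte Carlo repetitions. The closed_loop_cost function is a hypothetical placeholder standing in for running gamma-DDPC in closed loop, and the grid bounds and sizes are assumptions.

import itertools
import numpy as np

def closed_loop_cost(beta2, beta3, seed):
    # Placeholder for running gamma-DDPC in closed loop and returning the index J;
    # here a synthetic surrogate with a noisy minimum is used instead.
    rng = np.random.default_rng(seed)
    return (np.log10(beta2) - 1.0) ** 2 + (np.log10(beta3) + 2.0) ** 2 + 0.1 * rng.standard_normal()

grid = np.logspace(-6, 3, num=28)   # assumed logarithmically spaced search grid
n_mc = 10                           # Monte Carlo repetitions per grid point

best = None
for beta2, beta3 in itertools.product(grid, grid):
    J_avg = np.mean([closed_loop_cost(beta2, beta3, seed) for seed in range(n_mc)])
    if best is None or J_avg < best[0]:
        best = (J_avg, beta2, beta3)

print(f"best average cost {best[0]:.3f} at beta2 = {best[1]:.1e}, beta3 = {best[2]:.1e}")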
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2.2",
|
| 43 |
+
"parent_section_id": "3.2",
|
| 44 |
+
"section_name": "III-B2 Wheel slip control problem",
|
| 45 |
+
"text": "###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### We now consider the problem of designing a wheel slip controller, steering the vehicle slip to a constant target value . The design is carried out by focusing on quasi-stationary operating condition (the parameters of the vehicle, its velocity and the road profile are assumed to be constant). In both data collection and closed-loop testing, the behavior of the braking system (from now on indicated as ) is simulated based on the nonlinear model in [14 ###reference_b14###]:\nWe indicate with [Nm] the controllable braking torque and set the system parameters to the same values used in [15 ###reference_b15###]. Although this dynamics is clearly nonlinear, it is possible to identify two main operating regions of the system555In this case, these two regions are limited by the slip , for slip values lower than ; while it becomes unstable for higher slips., where the behavior of the slip can be approximated as linear. To comply with our framework (see (3 ###reference_###)), we thus consider both data collection and simulation tests where the vehicle generally operates in a low-slip regime. Accordingly, data are gathered by performing closed-loop experiments with the benchmark controller introduced in [16 ###reference_b16###], selecting a slip reference uniformly chosen at random in the interval and collecting data at a sampling rate of [Hz].\nIn particular, the output of the employed training data set shown in Fig. 2(a) ###reference_sf1###\nis generated by exploiting a closed-loop experiment wherein the output is corrupted by a zero-mean white noise process with variance and, also, zero-mean white noise with variance is added to the input provided by the controller.\nMeanwhile, the reference slip for the closed-loop tests is , corresponding to a reference braking torque [Nm]. To improve the tracking performance in closed-loop, apart from the terms weighting the tracking error and the difference between the predicted and reference torque, respectively weighted by and , the cost of the -DDPC problem (15a ###reference_.1###) is augmented with a term penalizing abrupt variations of the input (weighted as ), a term penalizing the integral of the tracking error (weighted as ), and two terms further penalizing the difference between the slip and torque references and their actual value over the last step of the prediction horizon (weighted as and , respectively). The following constraint is also added at each feedback step for :\nto account for the known dynamics of the integrator. Nonetheless, performances are still assessed via the index in (23 ###reference_###) over a closed-loop test of steps. A Monte Carlo campaign with iterations is run on the above setup, corrupting the output of with a white noise having signal-to-noise ratio dB. For each of the tests, the regularization parameters and are both selected from a grid over comprising of logarithmic-spaced points. For the joint optimization, the squared grid composed by the optimal values obtained via offline -DDPC is instead taken into account.\nFig. 2(b) ###reference_sf2### depicts the distributions of the performance index in (23 ###reference_###) as the selected regularization strategy varies considering tuned either offline or online and comparing -DDPC with a MPC-based oracle (see also Fig. 2(c) ###reference_sf3###). In particular, the input-output trajectories of all -DDPC strategies can be summarized in Fig. 3 ###reference_###. 
Although the MPC-based oracle displays evident preeminence, it is worthwhile to appreciate that all these trajectories are characterized by solid performances (rise time of at most steps, settling time of about steps, maximum overshoot of or less with no cross into the unstable region), with such traits indicating that -DDPC schemes remain competitive even in nonlinear scenarios. Noticeably, the online strategies (implementable on real applications) shown in Fig. 3(e) ###reference_sf5###-3(f) ###reference_sf6### share similar performances with the corresponding offline strategies in Fig. 3(b) ###reference_sf2### - 3(c) ###reference_sf3###, especially that relying on the online tuning of parameter . Moreover, within the setup of this numerical example, one observes that the performance of the offline strategy based on strictly matches with that of the scheme lacking of regularization (Fig. 3(a) ###reference_sf1###); whereas, the performance of the offline strategy based on strictly matches that of the scheme in which a joint optimization of both and (Fig. 3(d) ###reference_sf4###) is carried out. Hence, under this setup and with the data collected in this numerical example, it emerges once again that the optimal tuning based on the sole penalty parameter (i.e., setting ) can be considered in practice for high-data regimes. This, in turn, may lead to significantly diminish the computational burden associated to the tuning of the penalty parameters whenever a real implementation based on the proposed regularized scheme (II ###reference_###) is considered and a big training data set is available.\nThe above comparison further highlights that regularized DDPC approaches can be competitive w.r.t. traditional model-based controllers and that -DDPC solution with the online tuning proposed in [9 ###reference_b9###] can be robustly effective also when dealing with nonlinear systems."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "IV Concluding remarks and future directions",
|
| 51 |
+
"text": "Several regularization strategies for Data Driven Predictive Control (-DDPC) have been discussed and evaluated in terms of closed-loop performance. It has been proved that when the input is white, regularizing and penalizing control energy are equivalent. Numerical examples further illustrate that the tuning of the penalty parameters in the -DDPC can be decoupled without dramatically impacting the performance corresponding to a (more costly) joint regularization wherein both and are accounted for.\nFuture work will be devoted to the extensive experimental assessment of the considered regularization strategies, as well as to a theoretical analysis of the optimization of the sole ."
|
| 52 |
+
}
|
| 53 |
+
],
|
| 54 |
+
"appendix": [],
|
| 55 |
+
"tables": {},
|
| 56 |
+
"image_paths": {
|
| 57 |
+
"1(a)": {
|
| 58 |
+
"figure_path": "2304.00263v2_figure_1(a).png",
|
| 59 |
+
"caption": "(a) Performance indexes\nFigure 1: (a): comparison between the Kalman-filter-based oracle performance JO\u2062Rsubscript\ud835\udc3d\ud835\udc42\ud835\udc45J_{OR}italic_J start_POSTSUBSCRIPT italic_O italic_R end_POSTSUBSCRIPT and the minimum cost realizations J^ns,r\u2062gNd\u2062a\u2062t\u2062asubscriptsuperscript^\ud835\udc3dsubscript\ud835\udc41\ud835\udc51\ud835\udc4e\ud835\udc61\ud835\udc4esubscript\ud835\udc5b\ud835\udc60\ud835\udc5f\ud835\udc54\\widehat{J}^{N_{data}}_{n_{s},rg}over^ start_ARG italic_J end_ARG start_POSTSUPERSCRIPT italic_N start_POSTSUBSCRIPT italic_d italic_a italic_t italic_a end_POSTSUBSCRIPT end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_n start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT , italic_r italic_g end_POSTSUBSCRIPT for \u03a3Lsubscript\u03a3\ud835\udc3f\\Sigma_{L}roman_\u03a3 start_POSTSUBSCRIPT italic_L end_POSTSUBSCRIPT over 100100100100 Monte Carlo runs;\n(b): Optimal performance under the constraint \u03b22=0subscript\ud835\udefd20\\beta_{2}=0italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 0; (c): distribution of the corresponding minimizers (\u03b22\u22c6,\u03b23\u22c6)superscriptsubscript\ud835\udefd2\u22c6superscriptsubscript\ud835\udefd3\u22c6(\\beta_{2}^{\\star},\\beta_{3}^{\\star})( italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT , italic_\u03b2 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT ).",
|
| 60 |
+
"url": "http://arxiv.org/html/2304.00263v2/x1.png"
|
| 61 |
+
},
|
| 62 |
+
"1(b)": {
|
| 63 |
+
"figure_path": "2304.00263v2_figure_1(b).png",
|
| 64 |
+
"caption": "(b) Optimal performance under constraint \u03b22=0subscript\ud835\udefd20\\beta_{2}=0italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 0\nFigure 1: (a): comparison between the Kalman-filter-based oracle performance JO\u2062Rsubscript\ud835\udc3d\ud835\udc42\ud835\udc45J_{OR}italic_J start_POSTSUBSCRIPT italic_O italic_R end_POSTSUBSCRIPT and the minimum cost realizations J^ns,r\u2062gNd\u2062a\u2062t\u2062asubscriptsuperscript^\ud835\udc3dsubscript\ud835\udc41\ud835\udc51\ud835\udc4e\ud835\udc61\ud835\udc4esubscript\ud835\udc5b\ud835\udc60\ud835\udc5f\ud835\udc54\\widehat{J}^{N_{data}}_{n_{s},rg}over^ start_ARG italic_J end_ARG start_POSTSUPERSCRIPT italic_N start_POSTSUBSCRIPT italic_d italic_a italic_t italic_a end_POSTSUBSCRIPT end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_n start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT , italic_r italic_g end_POSTSUBSCRIPT for \u03a3Lsubscript\u03a3\ud835\udc3f\\Sigma_{L}roman_\u03a3 start_POSTSUBSCRIPT italic_L end_POSTSUBSCRIPT over 100100100100 Monte Carlo runs;\n(b): Optimal performance under the constraint \u03b22=0subscript\ud835\udefd20\\beta_{2}=0italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 0; (c): distribution of the corresponding minimizers (\u03b22\u22c6,\u03b23\u22c6)superscriptsubscript\ud835\udefd2\u22c6superscriptsubscript\ud835\udefd3\u22c6(\\beta_{2}^{\\star},\\beta_{3}^{\\star})( italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT , italic_\u03b2 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT ).",
|
| 65 |
+
"url": "http://arxiv.org/html/2304.00263v2/x2.png"
|
| 66 |
+
},
|
| 67 |
+
"1(c)": {
|
| 68 |
+
"figure_path": "2304.00263v2_figure_1(c).png",
|
| 69 |
+
"caption": "(c) Corresponding minimizers\nFigure 1: (a): comparison between the Kalman-filter-based oracle performance JO\u2062Rsubscript\ud835\udc3d\ud835\udc42\ud835\udc45J_{OR}italic_J start_POSTSUBSCRIPT italic_O italic_R end_POSTSUBSCRIPT and the minimum cost realizations J^ns,r\u2062gNd\u2062a\u2062t\u2062asubscriptsuperscript^\ud835\udc3dsubscript\ud835\udc41\ud835\udc51\ud835\udc4e\ud835\udc61\ud835\udc4esubscript\ud835\udc5b\ud835\udc60\ud835\udc5f\ud835\udc54\\widehat{J}^{N_{data}}_{n_{s},rg}over^ start_ARG italic_J end_ARG start_POSTSUPERSCRIPT italic_N start_POSTSUBSCRIPT italic_d italic_a italic_t italic_a end_POSTSUBSCRIPT end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_n start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT , italic_r italic_g end_POSTSUBSCRIPT for \u03a3Lsubscript\u03a3\ud835\udc3f\\Sigma_{L}roman_\u03a3 start_POSTSUBSCRIPT italic_L end_POSTSUBSCRIPT over 100100100100 Monte Carlo runs;\n(b): Optimal performance under the constraint \u03b22=0subscript\ud835\udefd20\\beta_{2}=0italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 0; (c): distribution of the corresponding minimizers (\u03b22\u22c6,\u03b23\u22c6)superscriptsubscript\ud835\udefd2\u22c6superscriptsubscript\ud835\udefd3\u22c6(\\beta_{2}^{\\star},\\beta_{3}^{\\star})( italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT , italic_\u03b2 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT ).",
|
| 70 |
+
"url": "http://arxiv.org/html/2304.00263v2/x3.png"
|
| 71 |
+
},
|
| 72 |
+
"2(a)": {
|
| 73 |
+
"figure_path": "2304.00263v2_figure_2(a).png",
|
| 74 |
+
"caption": "(a) Training data set of size 104superscript10410^{4}10 start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT\nFigure 2: (a): training data set employed for all the \u03b3\ud835\udefe\\gammaitalic_\u03b3-DDPC experiments on the wheel slip control problem.\n(b): comparison of the performance indexes obtained with different \u03b3\ud835\udefe\\gammaitalic_\u03b3-DDPC strategies (bar and hat notation indicating offline and online approaches respectively) and a model-based oracle. The subscript a\u2208{0,2,3,23}\ud835\udc4e02323a\\in\\{0,2,3,23\\}italic_a \u2208 { 0 , 2 , 3 , 23 } on J\ud835\udc3dJitalic_J refers to the regularization scheme (respectively: no regularization, \u03b22subscript\ud835\udefd2\\beta_{2}italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, \u03b23subscript\ud835\udefd3\\beta_{3}italic_\u03b2 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT or both);\n(c): input/output tracking obtained from an MPC-based oracle. Mean (line) and 1.951.951.951.95 times the standard deviation (shaded area) of the closed-loop input/output trajectories; the reference input and output are indicated with black dashed lines.",
|
| 75 |
+
"url": "http://arxiv.org/html/2304.00263v2/x4.png"
|
| 76 |
+
},
|
| 77 |
+
"2(b)": {
|
| 78 |
+
"figure_path": "2304.00263v2_figure_2(b).png",
|
| 79 |
+
"caption": "(b) Performance indexes\nFigure 2: (a): training data set employed for all the \u03b3\ud835\udefe\\gammaitalic_\u03b3-DDPC experiments on the wheel slip control problem.\n(b): comparison of the performance indexes obtained with different \u03b3\ud835\udefe\\gammaitalic_\u03b3-DDPC strategies (bar and hat notation indicating offline and online approaches respectively) and a model-based oracle. The subscript a\u2208{0,2,3,23}\ud835\udc4e02323a\\in\\{0,2,3,23\\}italic_a \u2208 { 0 , 2 , 3 , 23 } on J\ud835\udc3dJitalic_J refers to the regularization scheme (respectively: no regularization, \u03b22subscript\ud835\udefd2\\beta_{2}italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, \u03b23subscript\ud835\udefd3\\beta_{3}italic_\u03b2 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT or both);\n(c): input/output tracking obtained from an MPC-based oracle. Mean (line) and 1.951.951.951.95 times the standard deviation (shaded area) of the closed-loop input/output trajectories; the reference input and output are indicated with black dashed lines.",
|
| 80 |
+
"url": "http://arxiv.org/html/2304.00263v2/x5.png"
|
| 81 |
+
},
|
| 82 |
+
"2(c)": {
|
| 83 |
+
"figure_path": "2304.00263v2_figure_2(c).png",
|
| 84 |
+
"caption": "(c) MPC-based oracle\nFigure 2: (a): training data set employed for all the \u03b3\ud835\udefe\\gammaitalic_\u03b3-DDPC experiments on the wheel slip control problem.\n(b): comparison of the performance indexes obtained with different \u03b3\ud835\udefe\\gammaitalic_\u03b3-DDPC strategies (bar and hat notation indicating offline and online approaches respectively) and a model-based oracle. The subscript a\u2208{0,2,3,23}\ud835\udc4e02323a\\in\\{0,2,3,23\\}italic_a \u2208 { 0 , 2 , 3 , 23 } on J\ud835\udc3dJitalic_J refers to the regularization scheme (respectively: no regularization, \u03b22subscript\ud835\udefd2\\beta_{2}italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, \u03b23subscript\ud835\udefd3\\beta_{3}italic_\u03b2 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT or both);\n(c): input/output tracking obtained from an MPC-based oracle. Mean (line) and 1.951.951.951.95 times the standard deviation (shaded area) of the closed-loop input/output trajectories; the reference input and output are indicated with black dashed lines.",
|
| 85 |
+
"url": "http://arxiv.org/html/2304.00263v2/x6.png"
|
| 86 |
+
},
|
| 87 |
+
"3(a)": {
|
| 88 |
+
"figure_path": "2304.00263v2_figure_3(a).png",
|
| 89 |
+
"caption": "(a) J\u00af0subscript\u00af\ud835\udc3d0\\bar{J}_{0}over\u00af start_ARG italic_J end_ARG start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT: (\u03b22,\u03b23)=(0,+\u221e)subscript\ud835\udefd2subscript\ud835\udefd30(\\beta_{2},\\beta_{3})=(0,+\\infty)( italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_\u03b2 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ) = ( 0 , + \u221e )\nFigure 3: For all diagrams: mean (line) and 1.951.951.951.95 times the standard deviation (shaded area) of the closed-loop input/output trajectories; the reference input and output are indicated with black dashed lines.\n(a): \u03b3\ud835\udefe\\gammaitalic_\u03b3-DDPC with no regularization;\n(b)-(c): offline regularization strategies employing \u03b2\u00af2subscript\u00af\ud835\udefd2\\bar{\\beta}_{2}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2\u00af3subscript\u00af\ud835\udefd3\\bar{\\beta}_{3}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT separately;\n(d): offline regularization strategies employing \u03b2\u00af2subscript\u00af\ud835\udefd2\\bar{\\beta}_{2}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2\u00af3subscript\u00af\ud835\udefd3\\bar{\\beta}_{3}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT jointly;\n(e)-(f): online regularization strategies employing \u03b2^2subscript^\ud835\udefd2\\hat{\\beta}_{2}over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2^3subscript^\ud835\udefd3\\hat{\\beta}_{3}over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT separately.",
|
| 90 |
+
"url": "http://arxiv.org/html/2304.00263v2/x7.png"
|
| 91 |
+
},
|
| 92 |
+
"3(b)": {
|
| 93 |
+
"figure_path": "2304.00263v2_figure_3(b).png",
|
| 94 |
+
"caption": "(b) J\u00af2subscript\u00af\ud835\udc3d2\\bar{J}_{2}over\u00af start_ARG italic_J end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT: (\u03b22,\u03b23)=(\u03b2\u00af2,+\u221e)subscript\ud835\udefd2subscript\ud835\udefd3subscript\u00af\ud835\udefd2(\\beta_{2},\\beta_{3})=(\\bar{\\beta}_{2},+\\infty)( italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_\u03b2 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ) = ( over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , + \u221e )\nFigure 3: For all diagrams: mean (line) and 1.951.951.951.95 times the standard deviation (shaded area) of the closed-loop input/output trajectories; the reference input and output are indicated with black dashed lines.\n(a): \u03b3\ud835\udefe\\gammaitalic_\u03b3-DDPC with no regularization;\n(b)-(c): offline regularization strategies employing \u03b2\u00af2subscript\u00af\ud835\udefd2\\bar{\\beta}_{2}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2\u00af3subscript\u00af\ud835\udefd3\\bar{\\beta}_{3}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT separately;\n(d): offline regularization strategies employing \u03b2\u00af2subscript\u00af\ud835\udefd2\\bar{\\beta}_{2}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2\u00af3subscript\u00af\ud835\udefd3\\bar{\\beta}_{3}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT jointly;\n(e)-(f): online regularization strategies employing \u03b2^2subscript^\ud835\udefd2\\hat{\\beta}_{2}over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2^3subscript^\ud835\udefd3\\hat{\\beta}_{3}over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT separately.",
|
| 95 |
+
"url": "http://arxiv.org/html/2304.00263v2/x8.png"
|
| 96 |
+
},
|
| 97 |
+
"3(c)": {
|
| 98 |
+
"figure_path": "2304.00263v2_figure_3(c).png",
|
| 99 |
+
"caption": "(c) J\u00af3subscript\u00af\ud835\udc3d3\\bar{J}_{3}over\u00af start_ARG italic_J end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT: (\u03b22,\u03b23)=(0,\u03b2\u00af3(\\beta_{2},\\beta_{3})=(0,\\bar{\\beta}_{3}( italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_\u03b2 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ) = ( 0 , over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT)\nFigure 3: For all diagrams: mean (line) and 1.951.951.951.95 times the standard deviation (shaded area) of the closed-loop input/output trajectories; the reference input and output are indicated with black dashed lines.\n(a): \u03b3\ud835\udefe\\gammaitalic_\u03b3-DDPC with no regularization;\n(b)-(c): offline regularization strategies employing \u03b2\u00af2subscript\u00af\ud835\udefd2\\bar{\\beta}_{2}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2\u00af3subscript\u00af\ud835\udefd3\\bar{\\beta}_{3}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT separately;\n(d): offline regularization strategies employing \u03b2\u00af2subscript\u00af\ud835\udefd2\\bar{\\beta}_{2}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2\u00af3subscript\u00af\ud835\udefd3\\bar{\\beta}_{3}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT jointly;\n(e)-(f): online regularization strategies employing \u03b2^2subscript^\ud835\udefd2\\hat{\\beta}_{2}over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2^3subscript^\ud835\udefd3\\hat{\\beta}_{3}over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT separately.",
|
| 100 |
+
"url": "http://arxiv.org/html/2304.00263v2/x9.png"
|
| 101 |
+
},
|
| 102 |
+
"3(d)": {
|
| 103 |
+
"figure_path": "2304.00263v2_figure_3(d).png",
|
| 104 |
+
"caption": "(d) J\u00af23subscript\u00af\ud835\udc3d23\\bar{J}_{23}over\u00af start_ARG italic_J end_ARG start_POSTSUBSCRIPT 23 end_POSTSUBSCRIPT: (\u03b22,\u03b23)=(\u03b22\u22c6,\u03b23\u22c6)subscript\ud835\udefd2subscript\ud835\udefd3subscriptsuperscript\ud835\udefd\u22c62subscriptsuperscript\ud835\udefd\u22c63(\\beta_{2},\\beta_{3})=(\\beta^{\\star}_{2},\\beta^{\\star}_{3})( italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_\u03b2 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ) = ( italic_\u03b2 start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_\u03b2 start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT )\nFigure 3: For all diagrams: mean (line) and 1.951.951.951.95 times the standard deviation (shaded area) of the closed-loop input/output trajectories; the reference input and output are indicated with black dashed lines.\n(a): \u03b3\ud835\udefe\\gammaitalic_\u03b3-DDPC with no regularization;\n(b)-(c): offline regularization strategies employing \u03b2\u00af2subscript\u00af\ud835\udefd2\\bar{\\beta}_{2}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2\u00af3subscript\u00af\ud835\udefd3\\bar{\\beta}_{3}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT separately;\n(d): offline regularization strategies employing \u03b2\u00af2subscript\u00af\ud835\udefd2\\bar{\\beta}_{2}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2\u00af3subscript\u00af\ud835\udefd3\\bar{\\beta}_{3}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT jointly;\n(e)-(f): online regularization strategies employing \u03b2^2subscript^\ud835\udefd2\\hat{\\beta}_{2}over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2^3subscript^\ud835\udefd3\\hat{\\beta}_{3}over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT separately.",
|
| 105 |
+
"url": "http://arxiv.org/html/2304.00263v2/x10.png"
|
| 106 |
+
},
|
| 107 |
+
"3(e)": {
|
| 108 |
+
"figure_path": "2304.00263v2_figure_3(e).png",
|
| 109 |
+
"caption": "(e) J^2subscript^\ud835\udc3d2\\hat{J}_{2}over^ start_ARG italic_J end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT: (\u03b22,\u03b23)=(\u03b2^2,+\u221e)subscript\ud835\udefd2subscript\ud835\udefd3subscript^\ud835\udefd2(\\beta_{2},\\beta_{3})=(\\hat{\\beta}_{2},+\\infty)( italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_\u03b2 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ) = ( over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , + \u221e )\nFigure 3: For all diagrams: mean (line) and 1.951.951.951.95 times the standard deviation (shaded area) of the closed-loop input/output trajectories; the reference input and output are indicated with black dashed lines.\n(a): \u03b3\ud835\udefe\\gammaitalic_\u03b3-DDPC with no regularization;\n(b)-(c): offline regularization strategies employing \u03b2\u00af2subscript\u00af\ud835\udefd2\\bar{\\beta}_{2}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2\u00af3subscript\u00af\ud835\udefd3\\bar{\\beta}_{3}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT separately;\n(d): offline regularization strategies employing \u03b2\u00af2subscript\u00af\ud835\udefd2\\bar{\\beta}_{2}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2\u00af3subscript\u00af\ud835\udefd3\\bar{\\beta}_{3}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT jointly;\n(e)-(f): online regularization strategies employing \u03b2^2subscript^\ud835\udefd2\\hat{\\beta}_{2}over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2^3subscript^\ud835\udefd3\\hat{\\beta}_{3}over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT separately.",
|
| 110 |
+
"url": "http://arxiv.org/html/2304.00263v2/x11.png"
|
| 111 |
+
},
|
| 112 |
+
"3(f)": {
|
| 113 |
+
"figure_path": "2304.00263v2_figure_3(f).png",
|
| 114 |
+
"caption": "(f) J^3subscript^\ud835\udc3d3\\hat{J}_{3}over^ start_ARG italic_J end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT: (\u03b22,\u03b23)=(0,\u03b2^3)subscript\ud835\udefd2subscript\ud835\udefd30subscript^\ud835\udefd3(\\beta_{2},\\beta_{3})=(0,\\hat{\\beta}_{3})( italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_\u03b2 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ) = ( 0 , over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT )\nFigure 3: For all diagrams: mean (line) and 1.951.951.951.95 times the standard deviation (shaded area) of the closed-loop input/output trajectories; the reference input and output are indicated with black dashed lines.\n(a): \u03b3\ud835\udefe\\gammaitalic_\u03b3-DDPC with no regularization;\n(b)-(c): offline regularization strategies employing \u03b2\u00af2subscript\u00af\ud835\udefd2\\bar{\\beta}_{2}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2\u00af3subscript\u00af\ud835\udefd3\\bar{\\beta}_{3}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT separately;\n(d): offline regularization strategies employing \u03b2\u00af2subscript\u00af\ud835\udefd2\\bar{\\beta}_{2}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2\u00af3subscript\u00af\ud835\udefd3\\bar{\\beta}_{3}over\u00af start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT jointly;\n(e)-(f): online regularization strategies employing \u03b2^2subscript^\ud835\udefd2\\hat{\\beta}_{2}over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and \u03b2^3subscript^\ud835\udefd3\\hat{\\beta}_{3}over^ start_ARG italic_\u03b2 end_ARG start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT separately.",
|
| 115 |
+
"url": "http://arxiv.org/html/2304.00263v2/x12.png"
|
| 116 |
+
}
|
| 117 |
+
},
|
| 118 |
+
"validation": true,
|
| 119 |
+
"references": [],
|
| 120 |
+
"url": "http://arxiv.org/html/2304.00263v2"
|
| 121 |
+
}
|
20240323/2304.14670v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2305.15253v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2305.16582v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2306.04212v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2306.16788v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2308.11585v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2309.01157v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2309.03103v2.json
ADDED
|
@@ -0,0 +1,291 @@
| 1 |
+
{
|
| 2 |
+
"title": "ContrastWSD: Enhancing Metaphor Detection with Word Sense Disambiguation Following the Metaphor Identification Procedure",
|
| 3 |
+
"abstract": "This paper presents ContrastWSD, a RoBERTa-based metaphor detection model that integrates the Metaphor Identification Procedure (MIP) and Word Sense Disambiguation (WSD) to extract and contrast the contextual meaning with the basic meaning of a word to determine whether it is used metaphorically in a sentence. By utilizing the word senses derived from a WSD model, our model enhances the metaphor detection process and outperforms other methods that rely solely on contextual embeddings or integrate only the basic definitions and other external knowledge. We evaluate our approach on various benchmark datasets and compare it with strong baselines, indicating the effectiveness in advancing metaphor detection. \n\n\nKeywords:\u2009Metaphor Detection, Word Sense Disambiguation, Metaphor Identification Procedure",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "1. Introduction",
|
| 9 |
+
"text": "A metaphor, is a rhetorical device that compares, implicitly, two objects or concepts that are seemingly dissimilar but share symbolic or figurative similarities, with the intention of illuminating a fresh perspective and a more elaborate and nuanced comprehension of the world. Metaphors are not only intrinsic to creative writing, but they are also ubiquitous in human communication. Metaphors typically involve employing words in a manner that diverges from their basic definition, and their figurative sense is dependent on the context in which they are used. While novelty is an indicator of greater creativity in metaphors, sometimes they become widely used and established in the language, ultimately entering the lexicon as conventional metaphors, also known as dead metaphors Lakoff and Johnson (2008 ###reference_b10###).\nAutomatic metaphor detection, the process of identifying metaphoric expressions within a given text, is essential for various Natural Language Processing (NLP) tasks, such as sentiment analysis, text paraphrasing, and machine translation Mao et al. (2018 ###reference_b15###). The development of metaphor detection models presents a significant challenge, as it requires the identification and analysis of both the basic and contextual meanings of words within their respective contexts, as recommended by the Metaphor Identification Procedure (MIP) Pragglejaz Group (2007 ###reference_b18###); Steen et al. (2010 ###reference_b20###) (see Figure 1 ###reference_###).\nEarly approaches relied on extensive manual efforts in feature engineering. However, with the advent of word embedding techniques and neural networks, more efficient and effective methods emerged for this task Song et al. (2021 ###reference_b19###). Notably, transformer-based models have demonstrated promising capabilities in detecting metaphors Su et al. (2020 ###reference_b21###); Choi et al. (2021 ###reference_b8###); Babieno et al. (2022 ###reference_b1###); Li et al. (2023 ###reference_b14###). Despite these advancements, there is still scope for further improvement to simulate the Metaphor Identification Procedure effectively. Therefore, the main objective of this study is to investigate the efficacy of transformer-based models in word sense disambiguation and in following the systematic Metaphor Identification Procedure to extract and contrast between the contextual word sense and the basic definitions of a target word to enhance automatic metaphor detection.\nOur proposed method is evaluated on established benchmark datasets, and the results demonstrate significant improvements. Comparatively, our model consistently achieves superior (and occasionally comparable) precision, recall, and F1 scores when compared to other recent and robust metaphor detection models.\n###figure_1###"
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "2. Related Work",
|
| 15 |
+
"text": "Metaphor detection has been a subject of active research in NLP for several years. Traditional approaches to metaphor detection have relied on hand-crafted or automatically acquired linguistic features, but recent advancements in NLP have resulted in the development of transformer-based pre-trained models that have demonstrated state-of-the-art performance across various NLP tasks Choi et al. (2021 ###reference_b8###); Song et al. (2021 ###reference_b19###).\nDeepMet Su et al. (2020 ###reference_b21###) formulates metaphor detection as a reading comprehension task and uses a RoBERTa-based model that incorporates global and local text context, question information, and part-of-speech as features to detect metaphors.\nMelBERT Choi et al. (2021 ###reference_b8###) is a RoBERTa-based model that incorporates linguistic metaphor identification theories. MelBERT captures the context-dependent nature of metaphors and has demonstrated state-of-the-art performance on multiple benchmark datasets. While the authors considered the MIP procedure in their design, their focus was on leveraging contextual and out-of-context word embeddings to represent the word sense and basic definitions of the word. However, utilizing contextual word embeddings may not always accurately represent the word sense definition; instead, it may lean more towards the general contextual meaning. Similarly, out-of-context word embeddings may not necessarily reflect the basic meaning of the word, as they may be influenced by the frequent meaning, which might not align with the word\u2019s basic sense Pragglejaz Group (2007 ###reference_b18###); Babieno et al. (2022 ###reference_b1###). In contrast, we encode both the contextual and basic definitions of the target words, which are extracted from the dictionary. This enables us to provide a more comprehensive understanding of their meanings and better align with the MIP procedure.\nResearchers have also explored the use of external knowledge sources, such as definitions, word senses, and frames, to enhance the performance of metaphor detection models. For instance, Wan et al. ###reference_b22### (2021 ###reference_b22###) used gloss definitions to improve metaphor detection by considering both contextual embedding and contextual definition that best fits the context. Similarly, Babieno et al. ###reference_b1### (2022 ###reference_b1###) explored the integration of the most basic definitions from Wiktionary to improve MelBERT\u2019s performance, achieving comparable or superior results. In contrast, our model extracts the contextualized definitions and contrasts them with the basic definitions to align with the MIP procedure. FrameBERT Li et al. (2023 ###reference_b14###) proposed a new approach that incorporates FrameNet Baker et al. (1998 ###reference_b2###) embeddings to detect concept-level metaphors, achieving comparable or superior performance to existing models. Although encoding concepts may improve the model\u2019s understanding of similarities, it might not fully capture the variations in word meanings in various contexts.\nWord-Sense Disambiguation (WSD), which involves identifying the correct sense of a word in context, is a challenging task in NLP with various applications. Our study shows that WSD can aid in the process of identifying metaphors by disambiguating the word sense in given contexts. Multiple state-of-the-art WSD models have been proposed, including a modified version of BERT Yap et al. 
(2020 ###reference_b24###) trained on a combination of gloss selection and example sentence classification objectives. Bevilacqua and Navigli ###reference_b3### (2020 ###reference_b3###) propose a method for incorporating knowledge graph information into WSD systems to improve their performance. The authors use a large-scale knowledge graph (DBpedia) to provide additional context and semantic information for each word in the text. SenseBERT Levine et al. (2020 ###reference_b13###) pre-trains BERT on a large-scale sense-annotated corpus using a modified loss function to incorporate sense-aware training objectives.\nIncorporating WSD models to obtain contextual definitions for use in metaphor detection has also been explored. For example, Metaphorical Polysemy Detection (MPD) Maudslay and Teufel (2022 ###reference_b16###) has been proposed, which focuses on detecting conventional metaphors in WordNet. By combining MPD with a WSD model, this method can determine whether a target word represents a conventional metaphor within a given sentence. The authors have identified the two limitations mentioned earlier regarding MelBERT, which hinder its alignment with the MIP procedure. Particularly, attempting to implicitly infer word sense from the target word\u2019s contextual representation and assuming that the out-of-context embedding of the target word represents its basic meaning.\nTo address these issues, the authors trained an MPD model jointly with a WSD model to detect the metaphoricity of a target word, leveraging the word senses predicted by the WSD model for the target word in a given context. However, we argue that this model still lacks full alignment with MIP since it implicitly contrasts the basic definition with the word sense.\nTo achieve a more explicit alignment with MIP, we propose a different approach. Firstly, we utilize a WSD model to extract the word sense from a lexicon. Secondly, we tackle the second problem by considering the first definition listed in Wiktionary as the basic definition. This choice aligns with the dictionary\u2019s recommendation to utilize the logical hierarchy of word senses Babieno et al. (2022 ###reference_b1###). The explicit contrast between the basic and word sense definitions corresponds better to steps 3 to 5 outlined in the flowchart of the MIP procedure (see Figure 1 ###reference_###), as we elaborate on in the following section.\n###figure_2###"
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "3. Methodology",
|
| 21 |
+
"text": "In this section, we present the methodology used to develop and train ContrastWSD. Figure 2 ###reference_### provides an illustration of the data augmentation process and the subsequent metaphor detection process. We commence by introducing the datasets utilized in the study, followed by an overview of the word-sense augmentation procedure. Subsequently, we outline the modifications that were made to the MelBERT model\u2019s structure to enhance metaphor detection."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "3.1. Data Augmentation",
|
| 27 |
+
"text": "A major contribution of our research is the adherence to the systematic approach of the Metaphor Identification Procedure for detecting linguistic metaphors in the VUA datasets. The MIP procedure (as outlined in Figure 1 ###reference_###) involves: (1) comprehending the general meaning of the text, (2) determining lexical units, (3) identifying the contextual meaning of the units, (4) and verifying if there is a more basic meaning. (5) If the contextual meaning deviates from the basic meaning but remains understandable by comparison, (6) the unit is labeled as metaphorical.\nTo align with MIP, we augmented the existing datasets used in the MelBERT model through a two-step procedure. Firstly, we employed a BERT WSD model fine-tuned on a sequence-pair ranking task Yap et al. (2020 ###reference_b24###) to extract the word sense contextual definition of the target word from WordNet. To retrieve the contextual word sense, we feed a sentence to the WSD model, where [TGT] is a special token marking the location of the target word . The WSD model then performs gloss selection from WordNet and chooses the best definition of that fits the context. Secondly, we retrieved the basic definitions from the datasets compiled by Babieno et al. ###reference_b1### (2022 ###reference_b1###). The authors selected the first definition listed in the Wiktionary dictionary as the basic definition, following the dictionary\u2019s recommendation to utilize the logical hierarchy of word senses in their guidelines."
|
| 28 |
+
},
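The word-sense augmentation described in the section above can be illustrated with a short sketch: candidate glosses are pulled from WordNet for the target word, each gloss is scored against the [TGT]-marked sentence as a sequence pair, and the top-scoring gloss is kept as the contextual definition (the basic definition is taken separately from Wiktionary). This is a minimal illustration, not the authors' code: the checkpoint name, the generic sequence-classification head, and the contextual_definition helper are assumptions made for the example; the paper uses the gloss-ranking BERT of Yap et al. (2020), whose tokenizer treats [TGT] as a dedicated special token.

# Illustrative sketch only (assumed checkpoint and helper names).
# Requires: pip install torch transformers nltk; nltk.download("wordnet")
import torch
from nltk.corpus import wordnet as wn
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # placeholder; the paper uses a BERT fine-tuned for gloss ranking
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)
model.eval()

def contextual_definition(marked_sentence, lemma, pos=wn.VERB):
    """Return the WordNet gloss that best fits the [TGT]-marked sentence."""
    glosses = [s.definition() for s in wn.synsets(lemma, pos=pos)]
    if not glosses:
        return None
    # Encode (sentence, gloss) pairs and score each candidate gloss.
    enc = tokenizer([marked_sentence] * len(glosses), glosses,
                    padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        scores = model(**enc).logits.squeeze(-1)
    return glosses[int(scores.argmax())]

# Example: disambiguate "devour" used figuratively.
print(contextual_definition("She [TGT] devoured [TGT] the novel in one sitting .", "devour"))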
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.2",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "3.2. Model Structure",
|
| 33 |
+
"text": "Our approach to metaphor detection, involves treating it as a binary classification problem based on the target word in a given sentence . Our aim is to predict whether the target word is being used metaphorically or literally in the sentence. To accomplish this, we leverage the contextual and basic meanings of the target word in the given sentence. Our model utilizes three separate RoBERTa models to encode the sentence , the isolated target word , as well as the contextual and basic definitions and .\nFollowing the MelBERT design, we modify the sentence by appending the POS tag to its end, and we enclose it with two segment separation tokens [SEP]. Additionally, we employ three types of extra embeddings: (1) The target embedding, used to indicate the target word. (2) The local context embedding, which marks either the words in the clause containing the target word between two commas or the definition and word sense. (3) The POS embedding, used to mark the position of the POS tag. We incorporate the target word and the definitions by prepending the word to the definitions we retrieve from WordNet (for the word sense) or Wiktionary (for the basic definition). This format is similar to how words appear in a dictionary, and we do this to utilize their hidden representation later in the model. We separate the target word from the definitions using the segment separation token [SEP].\nThe RoBERTa models produce the hidden representations , , and encoding the sentence, the contextual definition, and the basic definition, respectively with the extensions as described. and are produced by averaging the embedding of all the output tokens.\nSince the MIP procedure focuses on the semantic contrast between the basic and contextual meanings of a word, we encode the basic and contextual meanings. Thus, our MIP layer uses the word sense embedding and the basic definition embedding . We also use cosine similarity to measure the semantic gap between the embeddings, similar to the approach employed by Babieno et al. ###reference_b1### (2022 ###reference_b1###).\nWhere is a fully-connected layer. We have also introduced two additional helper layers to our model. The first layer learns the relationship between the target word\u2019s contextual embedding and the target word embedding adjacent to the word\u2019s sense , while the second layer learns the relationship between the target word\u2019s contextual embedding and the target word embedding adjacent to the word\u2019s basic definition . We believe that these helper layers will assist the MIP layer when the WSD model fails to distinguish between the word sense and the basic definition, particularly in the case of detecting novel metaphors that lack multiple definitions in the dictionary.\nWe concatenate the hidden vectors from the MIP and the two helper layers before feeding them to the final binary classification layer for metaphor prediction."
|
| 34 |
+
},
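A minimal sketch of the contrast-and-classify head described in the model-structure section above: the MIP layer combines the encoded word-sense and basic-definition representations with their cosine similarity, two helper layers relate the target word's contextual embedding to the target-word embeddings produced alongside each definition, and the concatenation of the three hidden vectors feeds a binary classifier. Hidden sizes, the projection width, the ReLU activations, and the ContrastHead name are assumptions for illustration, not the released implementation.

# Minimal sketch under assumed dimensions; not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastHead(nn.Module):
    def __init__(self, hidden=768, proj=128):
        super().__init__()
        self.mip = nn.Linear(2 * hidden + 1, proj)        # contrast: word sense vs. basic definition
        self.helper_sense = nn.Linear(2 * hidden, proj)   # target word vs. word next to its sense
        self.helper_basic = nn.Linear(2 * hidden, proj)   # target word vs. word next to its basic definition
        self.classifier = nn.Linear(3 * proj, 1)          # metaphorical / literal logit

    def forward(self, h_sense, h_basic, h_target, h_word_sense, h_word_basic):
        # Cosine similarity measures the semantic gap between the two definitions.
        cos = F.cosine_similarity(h_sense, h_basic, dim=-1).unsqueeze(-1)
        h_mip = torch.relu(self.mip(torch.cat([h_sense, h_basic, cos], dim=-1)))
        h_s = torch.relu(self.helper_sense(torch.cat([h_target, h_word_sense], dim=-1)))
        h_b = torch.relu(self.helper_basic(torch.cat([h_target, h_word_basic], dim=-1)))
        # Concatenate MIP and helper vectors before the final binary prediction layer.
        return self.classifier(torch.cat([h_mip, h_s, h_b], dim=-1))  # apply sigmoid + BCE outside

# Usage with five pooled RoBERTa vectors for a batch of 4 sentences.
head = ContrastHead()
logit = head(*(torch.randn(4, 768) for _ in range(5)))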
|
| 35 |
+
{
|
| 36 |
+
"section_id": "4",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "4. Experiments",
|
| 39 |
+
"text": "In this section, we present the datasets, baseline models, and experimental setup used to evaluate our model. We also discuss various aspects of our experimentation process."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "5",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "5. Empirical Results and Case Studies",
|
| 45 |
+
"text": "In this paper, we presented a RoBERTa-based model for metaphor detection that follows the Metaphor Identification Procedure by utilizing a WSD model to extract and contrast the contextual meaning with the basic meaning of a target word. We evaluated our model on several benchmark datasets and demonstrated that leveraging senses and contrasting them can enhance the performance of metaphor detection models. Our proposed model outperformed other state-of-the-art metaphor detection models. Our work provides compelling evidence for further exploration of the use of WSD models and sense-contrasting techniques to enhance the performance of metaphor detection models.\nIn future work, we plan to investigate the integration of commonsense models such as COMET Bosselut et al. (2019 ###reference_b5###) to extract and utilize the common sense knowledge from the target word. COMET was used extensively in recent metaphor generation models Elzohbi and Zhao (2023 ###reference_b9###). We believe that this integration will enable better differentiation of novel metaphors from nonsensical expressions."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "6",
|
| 49 |
+
"parent_section_id": null,
|
| 50 |
+
"section_name": "6. Conclusion and Future Work",
|
| 51 |
+
"text": "In this paper, we presented a RoBERTa-based model for metaphor detection that follows the Metaphor Identification Procedure by utilizing a WSD model to extract and contrast the contextual meaning with the basic meaning of a target word. We evaluated our model on several benchmark datasets and demonstrated that leveraging senses and contrasting them can enhance the performance of metaphor detection models. Our proposed model outperformed other state-of-the-art metaphor detection models. Our work provides compelling evidence for further exploration of the use of WSD models and sense-contrasting techniques to enhance the performance of metaphor detection models.\nIn future work, we plan to investigate the integration of commonsense models such as COMET Bosselut et al. (2019 ###reference_b5### ###reference_b5###) to extract and utilize the common sense knowledge from the target word. COMET was used extensively in recent metaphor generation models Elzohbi and Zhao (2023 ###reference_b9### ###reference_b9###). We believe that this integration will enable better differentiation of novel metaphors from nonsensical expressions."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "7",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "7. Bibliographical References",
|
| 57 |
+
"text": ""
|
| 58 |
+
}
|
| 59 |
+
],
|
| 60 |
+
"appendix": [],
|
| 61 |
+
"tables": {
|
| 62 |
+
"1": {
|
| 63 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.1\" style=\"width:433.6pt;height:650.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(123.4pt,-185.2pt) scale(2.32019704151959,2.32019704151959) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T1.1.1.1.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T1.1.1.1.1.2\">Model</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.3\">Rec</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.4\">Prec</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.1.1.1.1.5\">F1</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.2.1.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T1.1.1.2.1.1.1\">VUA-18</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.2.1.2\">MelBERT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.3\">77.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.4\">79.87</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.5\">78.66</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.1.3.2.1\">MsW_cos</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.2\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T1.1.1.3.2.2.1\">77.88</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.3.2.3.1\">80.31</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.4\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T1.1.1.3.2.4.1\">79.07</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.1.4.3.1\">FrameBERT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.2\">76.78</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.3\">79.33</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.4\">78.03</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.5.4\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.1.5.4.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.1.5.4.2\">ContrastWSD</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.5.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.5.4.3.1\">78.85</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.5.4.4\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T1.1.1.5.4.4.1\">80.16</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.5.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.5.4.5.1\">79.50</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r 
ltx_border_t\" id=\"S4.T1.1.1.6.5.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T1.1.1.6.5.1.1\">VUA-18 (-)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.6.5.2\">MelBERT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.6.5.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T1.1.1.6.5.3.1\">73.34</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.6.5.4\">71.44</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.6.5.5\">72.35</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.7.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.1.7.6.1\">MsW_cos</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.2\">70.52</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.7.6.3.1\">74.77</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.4\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T1.1.1.7.6.4.1\">72.55</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.8.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.1.8.7.1\">FrameBERT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.8.7.2\">70.33</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.8.7.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T1.1.1.8.7.3.1\">73.81</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.8.7.4\">72.02</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.9.8\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.1.9.8.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.1.9.8.2\">ContrastWSD</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.9.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.9.8.3.1\">75.23</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.9.8.4\">72.38</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.9.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.9.8.5.1\">73.77</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.10.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.10.9.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T1.1.1.10.9.1.1\">VUA-20</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.10.9.2\">MelBERT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.10.9.3\">69.51</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.10.9.4\">75.58</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.10.9.5\">72.40</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.11.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.1.11.10.1\">MsW_cos</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.11.10.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.11.10.2.1\">69.98</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.11.10.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T1.1.1.11.10.3.1\">75.64</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.11.10.4\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T1.1.1.11.10.4.1\">72.70</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.12.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row 
ltx_border_r\" id=\"S4.T1.1.1.12.11.1\">FrameBERT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.12.11.2\">69.30</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.12.11.3\">75.62</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.12.11.4\">72.31</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.13.12\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.1.13.12.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.1.13.12.2\">ContrastWSD</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.13.12.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T1.1.1.13.12.3.1\">69.89</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.13.12.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.13.12.4.1\">76.60</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.13.12.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.13.12.5.1\">73.09</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.14.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.14.13.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T1.1.1.14.13.1.1\">VUA-20 (-)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.14.13.2\">MelBERT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.14.13.3\">74.61</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.14.13.4\">71.26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.14.13.5\">72.88</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.15.14\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.1.15.14.1\">MsW_cos</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.15.14.2\">77.22</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.15.14.3\">68.39</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.15.14.4\">72.51</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.16.15\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T1.1.1.16.15.1\">FrameBERT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.16.15.2\">75.07</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.16.15.3\">71.59</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.16.15.4\">73.28</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.17.16\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T1.1.1.17.16.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T1.1.1.17.16.2\">ContrastWSD</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.1.1.17.16.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.17.16.3.1\">75.57</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.1.1.17.16.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.17.16.4.1\">72.95</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T1.1.1.17.16.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.17.16.5.1\">74.22</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Evaluation results on VUA-18, VUA-20, and their counter pruned datasets. 
A bold number corresponds to the best performing model, while the underlined number the second best.</figcaption>\n</figure>",
|
| 64 |
+
"capture": "Table 1: Evaluation results on VUA-18, VUA-20, and their counter pruned datasets. A bold number corresponds to the best performing model, while the underlined number the second best."
|
| 65 |
+
},
|
| 66 |
+
"2": {
|
| 67 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.1\" style=\"width:433.6pt;height:664.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(125.3pt,-192.1pt) scale(2.36950263801334,2.36950263801334) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T2.1.1.1.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S4.T2.1.1.1.1.2\">Model</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.3\">Rec</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.4\">Prec</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.1.1.1.1.5\">F1</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.2.1.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T2.1.1.2.1.1.1\">verb</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.2.1.2\">MelBERT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.2.1.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.1.2.1.3.1\">80.25</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.2.1.4\">71.88</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.2.1.5\">75.83</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.1.3.2.1\">MsW_cos</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.3.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.3.2.2.1\">81.85</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.3.2.3\">70.91</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.3.2.4\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.1.3.2.4.1\">75.97</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.1.4.3.1\">FrameBERT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.3.2\">75.11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.3.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.1.4.3.3.1\">74.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.3.4\">74.55</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.5.4\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.1.5.4.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.1.5.4.2\">ContrastWSD</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.5.4.3\">78.87</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.5.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.5.4.4.1\">75.70</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.5.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.5.4.5.1\">77.25</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r 
ltx_border_t\" id=\"S4.T2.1.1.6.5.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T2.1.1.6.5.1.1\">noun</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.6.5.2\">MelBERT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.6.5.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.1.6.5.3.1\">66.98</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.6.5.4\">73.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.6.5.5\">(70.19)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.7.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.1.7.6.1\">MsW_cos</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.7.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.7.6.2.1\">68.54</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.7.6.3\">73.12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.7.6.4\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.1.7.6.4.1\">(70.76)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.8.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.1.8.7.1\">FrameBERT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.8.7.2\">62.57</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.8.7.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.1.8.7.3.1\">75.35</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.8.7.4\">68.35</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.9.8\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.1.9.8.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.1.9.8.2\">ContrastWSD</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.9.8.3\">66.29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.9.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.9.8.4.1\">76.36</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.9.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.9.8.5.1\">70.97</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.10.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.10.9.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T2.1.1.10.9.1.1\">adverb</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.10.9.2\">MelBERT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.10.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.10.9.3.1\">69.20</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.10.9.4\">71.83</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.10.9.5\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.1.10.9.5.1\">70.43</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.11.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.1.11.10.1\">MsW_cos</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.11.10.2\">64.57</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.11.10.3\">71.43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.11.10.4\">67.80</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.12.11\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.1.12.11.1\">FrameBERT</th>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T2.1.1.12.11.2\">65.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.12.11.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.1.12.11.3.1\">74.56</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.12.11.4\">69.88</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.13.12\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.1.13.12.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.1.13.12.2\">ContrastWSD</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.13.12.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.1.13.12.3.1\">68.52</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.13.12.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.13.12.4.1\">77.45</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.13.12.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.13.12.5.1\">72.63</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.14.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.14.13.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S4.T2.1.1.14.13.1.1\">adjective</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.14.13.2\">MelBERT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.14.13.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.14.13.3.1\">67.06</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.14.13.4\">65.97</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.14.13.5\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.1.14.13.5.1\">66.47</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.15.14\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.1.15.14.1\">MsW_cos</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.15.14.2\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.1.15.14.2.1\">66.35</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.15.14.3\">64.39</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.15.14.4\">65.34</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.16.15\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S4.T2.1.1.16.15.1\">FrameBERT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.16.15.2\">64.25</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.16.15.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S4.T2.1.1.16.15.3.1\">68.09</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.16.15.4\">66.06</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.17.16\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T2.1.1.17.16.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S4.T2.1.1.17.16.2\">ContrastWSD</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.17.16.3\">65.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.17.16.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.17.16.4.1\">70.83</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S4.T2.1.1.17.16.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.17.16.5.1\">68.18</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption 
ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Performance comparison by POS tags. The results in between brackets indicate no statistically significant differences compared to ContrastWSD.</figcaption>\n</figure>",
|
| 68 |
+
"capture": "Table 2: Performance comparison by POS tags. The results in between brackets indicate no statistically significant differences compared to ContrastWSD."
|
| 69 |
+
},
|
| 70 |
+
"3": {
|
| 71 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T3.1\" style=\"width:433.6pt;height:460.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(115.9pt,-123.1pt) scale(2.1475146569055,2.1475146569055) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T3.1.1.2.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T3.1.1.2.1.2\">Model</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.1.1.2.1.3\">Rec</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.1.1.2.1.4\">Prec</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T3.1.1.2.1.5\">F1</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T3.1.1.3.1.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T3.1.1.3.1.1.1\">VUAverb</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T3.1.1.3.1.2\">MelBERT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.3.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.1.3.1.3.1\">81.08</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.3.1.4\">55.24</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.3.1.5\">65.57</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.4.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.1.4.2.1\">MsW_cos</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.4.2.2\">77.88</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.4.2.3\">61.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.4.2.4\">68.68</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.5.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.1.5.3.1\">FrameBERT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.5.3.2\">73.33</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.5.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.1.5.3.3.1\">71.95</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.5.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.1.5.3.4.1\">(72.62)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.6.4\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.1.6.4.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.1.6.4.2\">ContrastWSD</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.6.4.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T3.1.1.6.4.3.1\">79.18</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.6.4.4\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T3.1.1.6.4.4.1\">66.97</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.6.4.5\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T3.1.1.6.4.5.1\">72.54</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r 
ltx_border_t\" id=\"S5.T3.1.1.7.5.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T3.1.1.7.5.1.1\">VUAverb (-)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T3.1.1.7.5.2\">MelBERT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.7.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.1.7.5.3.1\">81.40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.7.5.4\">51.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.7.5.5\">62.87</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.8.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.1.8.6.1\">MsW_cos</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.8.6.2\">79.26</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.8.6.3\">59.46</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.8.6.4\">67.92</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.9.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.1.9.7.1\">FrameBERT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.9.7.2\">74.63</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.9.7.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.1.9.7.3.1\">70.68</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.9.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.1.9.7.4.1\">(72.56)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.10.8\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.1.10.8.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.1.10.8.2\">ContrastWSD</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.10.8.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T3.1.1.10.8.3.1\">79.28</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.10.8.4\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T3.1.1.10.8.4.1\">66.66</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.10.8.5\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T3.1.1.10.8.5.1\">72.42</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T3.1.1.1.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T3.1.1.1.1.1\">VUAverb ()</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T3.1.1.1.2\">MelBERT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.1.3\">72.22</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.1.4\">76.45</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.1.5\">74.27</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.11.9\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T3.1.1.11.9.1\">MsW_cos</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.11.9.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.1.11.9.2.1\">75.09</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.11.9.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T3.1.1.11.9.3.1\">78.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.11.9.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.1.11.9.4.1\">(76.51)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.12.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" 
id=\"S5.T3.1.1.12.10.1\">FrameBERT</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.12.10.2\">69.96</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.12.10.3\">77.60</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.1.12.10.4\">73.55</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.13.11\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S5.T3.1.1.13.11.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S5.T3.1.1.13.11.2\">ContrastWSD</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T3.1.1.13.11.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T3.1.1.13.11.3.1\">73.81</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T3.1.1.13.11.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.1.13.11.4.1\">78.39</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T3.1.1.13.11.5\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T3.1.1.13.11.5.1\">76.03</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Evaluation results on VUA-verb, VUA-verb (-), and on the VUA-verb () datasets.</figcaption>\n</figure>",
|
| 72 |
+
"capture": "Table 3: Evaluation results on VUA-verb, VUA-verb (-), and on the VUA-verb () datasets."
|
| 73 |
+
},
|
| 74 |
+
"4": {
|
| 75 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T4.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T4.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S5.T4.1.2.1.1\">Model</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S5.T4.1.2.1.2\">F1 - All</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.T4.1.2.1.3\">F1 - Conv</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T4.1.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S5.T4.1.3.1.1\">MPD_WSD</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.1.3.1.2\">63.10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.1.3.1.3\">65.90</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S5.T4.1.1.1\">ContrastWSD ()</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.1.1.2\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T4.1.1.2.1\">73.42</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.1.1.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T4.1.1.3.1\">72.31</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.1.4.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S5.T4.1.4.2.1\">ContrastWSD</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T4.1.4.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.1.4.2.2.1\">73.84</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T4.1.4.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.1.4.2.3.1\">73.07</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Performance comparison on the VUA-MPD-All and the VUA-MPD-Conv subsets.</figcaption>\n</figure>",
|
| 76 |
+
"capture": "Table 4: Performance comparison on the VUA-MPD-All and the VUA-MPD-Conv subsets."
|
| 77 |
+
},
|
| 78 |
+
"5": {
|
| 79 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.SS0.SSS0.Px2.tab1\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.SS0.SSS0.Px2.tab1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.1\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.1.1\" style=\"width:6.9pt;height:47.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"width:47.9pt;transform:translate(-20.5pt,-20.5pt) rotate(-90deg) ;\">\n<p class=\"ltx_p\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.1.1.1.1\">True Label</span></p>\n</span></div>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.2\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.2.1\" style=\"width:6.8pt;height:61.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"width:61.6pt;transform:translate(-27.39pt,-27.39pt) rotate(-90deg) ;\">\n<p class=\"ltx_p\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.2.1.1.1\">ContrastWSD</span></p>\n</span></div>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.3\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.3.1\" style=\"width:6.8pt;height:55pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"width:55.0pt;transform:translate(-24.1pt,-24.1pt) rotate(-90deg) ;\">\n<p class=\"ltx_p\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.3.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.3.1.1.1\">FrameBERT</span></p>\n</span></div>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.4\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.4.1\" style=\"width:9.0pt;height:44.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"width:44.3pt;transform:translate(-17.64pt,-16.64pt) rotate(-90deg) ;\">\n<p class=\"ltx_p\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.4.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.4.1.1.1\">MsW_cos</span></p>\n</span></div>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.5\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.5.1\" style=\"width:6.9pt;height:44pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"width:44.0pt;transform:translate(-18.54pt,-18.54pt) rotate(-90deg) ;\">\n<p class=\"ltx_p\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.5.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.5.1.1.1\">MelBERT</span></p>\n</span></div>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.6\">\n<span class=\"ltx_text\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.6.1.1\">Sentence Context</span> </span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.7\">\n<span class=\"ltx_text ltx_font_bold\" 
id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.7.1\">Word Sense</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.8\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.SS0.SSS0.Px2.tab1.1.1.1.8.1\">Basic Definition</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.SS0.SSS0.Px2.tab1.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.2.1.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.2.1.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.2.1.3\">\u2715</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.2.1.4\">\u2715</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.2.1.5\">\u2715</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.2.1.6\" style=\"width:113.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.2.1.6.1\">\u2026 and pull all nuclear <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS0.SSS0.Px2.tab1.1.2.1.6.1.1\" style=\"color:#0000FF;\">plant</span> out of the impending sale \u2026</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.2.1.7\" style=\"width:85.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.2.1.7.1\">Buildings for carrying on industrial labor.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.2.1.8\" style=\"width:113.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.2.1.8.1\">An organism that is not an animal, especially an organism capable of photosynthesis.</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS0.SSS0.Px2.tab1.1.3.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.3.2.1\">\u2715</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.3.2.2\">\u2715</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.3.2.3\">\u2715</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.3.2.4\">\u2715</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.3.2.5\">\u2715</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.3.2.6\" style=\"width:113.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.3.2.6.1\">It is a good time to <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS0.SSS0.Px2.tab1.1.3.2.6.1.1\" style=\"color:#0000FF;\">plant</span> hardy shrubs too.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.3.2.7\" style=\"width:85.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.3.2.7.1\">(botany) a living organism lacking the power of locomotion.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.3.2.8\" style=\"width:113.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.3.2.8.1\">An organism that is not an animal, especially an organism capable of photosynthesis.</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS0.SSS0.Px2.tab1.1.4.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.4.3.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S5.SS0.SSS0.Px2.tab1.1.4.3.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.4.3.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.4.3.4\">\u2715</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.4.3.5\">\u2715</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.4.3.6\" style=\"width:113.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.4.3.6.1\">\u2026 1,500-tonne consignment of Canadian PCBs (polychlorinated biphenyls) bound for this <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS0.SSS0.Px2.tab1.1.4.3.6.1.1\" style=\"color:#0000FF;\">plant</span> in Gwent.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.4.3.7\" style=\"width:85.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.4.3.7.1\">Buildings for carrying on industrial labor.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.4.3.8\" style=\"width:113.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.4.3.8.1\">An organism that is not an animal, especially an organism capable of photosynthesis.</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS0.SSS0.Px2.tab1.1.5.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.5.4.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.5.4.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.5.4.3\">\u2715</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.5.4.4\">\u2715</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.5.4.5\">\u2715</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.5.4.6\" style=\"width:113.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.5.4.6.1\">Stars appear and the shadows are fallin\u2019. 
You can hear my <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS0.SSS0.Px2.tab1.1.5.4.6.1.1\" style=\"color:#0000FF;\">honey</span> callin\u2019</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.5.4.7\" style=\"width:85.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.5.4.7.1\">A beloved person; used as terms of endearment.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.5.4.8\" style=\"width:113.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.5.4.8.1\">A viscous, sweet fluid produced from plant nectar by bees \u2026</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS0.SSS0.Px2.tab1.1.6.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.6.5.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.6.5.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.6.5.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.6.5.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.6.5.5\">\u2713</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.6.5.6\" style=\"width:113.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.6.5.6.1\">\u2026 but the tops of the mountains are still golden, as though <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS0.SSS0.Px2.tab1.1.6.5.6.1.1\" style=\"color:#0000FF;\">honey</span> had been poured lightly over them \u2026</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.6.5.7\" style=\"width:85.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.6.5.7.1\">A sweet yellow liquid produced by bees.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.6.5.8\" style=\"width:113.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.6.5.8.1\">A viscous, sweet fluid produced from plant nectar by bees \u2026</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS0.SSS0.Px2.tab1.1.7.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.7.6.1\">\u2715</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.7.6.2\">\u2715</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.7.6.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.7.6.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.7.6.5\">\u2713</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.7.6.6\" style=\"width:113.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.7.6.6.1\">\u2026 I\u2019ll be <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS0.SSS0.Px2.tab1.1.7.6.6.1.1\" style=\"color:#0000FF;\">jumping</span> up and down like a what d\u2019 ya call it!</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.7.6.7\" style=\"width:85.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.7.6.7.1\">Move forward by leaps and bounds.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.7.6.8\" style=\"width:113.8pt;\">\n<p class=\"ltx_p 
ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.7.6.8.1\">To propel oneself rapidly upward, downward and/or in any horizontal direction \u2026</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS0.SSS0.Px2.tab1.1.8.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.8.7.1\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.8.7.2\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.8.7.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.8.7.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.8.7.5\">\u2713</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.8.7.6\" style=\"width:113.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.8.7.6.1\">\u2026 there are plenty of girls who would <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS0.SSS0.Px2.tab1.1.8.7.6.1.1\" style=\"color:#0000FF;\">jump</span> at the chance.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_r ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.8.7.7\" style=\"width:85.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.8.7.7.1\">Enter eagerly into.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"S5.SS0.SSS0.Px2.tab1.1.8.7.8\" style=\"width:113.8pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.SS0.SSS0.Px2.tab1.1.8.7.8.1\">To propel oneself rapidly upward, downward and/or in any horizontal direction \u2026</p>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Examples of predictions made by ContrastWSD and the baseline models on the VUA-20 testing dataset. \u2713 marks a metaphor prediction and \u2715 marks a literal prediction.</figcaption>\n<section class=\"ltx_paragraph\" id=\"S5.SS0.SSS0.Px3\">\n<h4 class=\"ltx_title ltx_title_paragraph\">Case Studies:</h4>\n<div class=\"ltx_para\" id=\"S5.SS0.SSS0.Px3.p1\">\n<p class=\"ltx_p\" id=\"S5.SS0.SSS0.Px3.p1.1\">As shown in the results, our ContrastWSD model exhibits relatively higher gains. To exemplify instances where ContrastWSD correctly labeled examples that were incorrectly labeled by the baselines, we conducted several case studies. Table <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.03103v2#S5.SS0.SSS0.Px2\" title=\"Overall Results \u2023 5. Empirical Results and Case Studies \u2023 ContrastWSD: Enhancing Metaphor Detection with Word Sense Disambiguation Following the Metaphor Identification Procedure\"><span class=\"ltx_text ltx_ref_tag\">5</span></a> presents a few cases that demonstrate the benefits of our approach, involving contrasting word senses while considering the context of the target sentence. For these case studies, we selected the models trained on the VUA-20 dataset. For each model, we chose the best performing one among the 5 seeds. The examples shown in the table are drawn from the VUA-20 testing dataset.</p>\n</div>\n<div class=\"ltx_para\" id=\"S5.SS0.SSS0.Px3.p2\">\n<p class=\"ltx_p\" id=\"S5.SS0.SSS0.Px3.p2.2\">For instance, we observed the word \u201cplant\u201d mentioned 15 times in the testing dataset: 14 times in a metaphorical sense and only once in a non-metaphorical sense. 
Both MsW_cos and MelBERT labeled all of these occurrences as non-metaphorical. In contrast, our model correctly identified of these occurrences, while FrameBERT only identified correctly. The other models have only recognized the non-metaphorical instance and none of the metaphorical instances correctly. Three of these examples are mentioned in the table.</p>\n</div>\n<div class=\"ltx_para\" id=\"S5.SS0.SSS0.Px3.p3\">\n<p class=\"ltx_p\" id=\"S5.SS0.SSS0.Px3.p3.1\">Another example involves the word \u201choney\u201d, which was mentioned twice as a metaphor in the testing dataset. In the first instance, it was used in a conventional way, and our model correctly annotated it as metaphorical, leveraging the extracted word sense. The other models did not recognize this metaphorical usage. In the second instance, \u201choney\u201d appeared as a novel metaphor where our model, along with the other baseline models, marked it as metaphorical. Even though the word sense was similar to the basic sense, our model still identified it as metaphorical. This indicates that our model can recognize both novel and conventional metaphors.</p>\n</div>\n<div class=\"ltx_para\" id=\"S5.SS0.SSS0.Px3.p4\">\n<p class=\"ltx_p\" id=\"S5.SS0.SSS0.Px3.p4.1\">Finally, the word \u201cjump\u201d occurs two times in the testing dataset, appearing in two different tenses and senses. In one instance, the word \u201cjumping\u201d was used in the literal sense, and our model correctly identified it as literal, considering that the word sense was similar to the definition. However, the other models did not recognize it as such. In the other occurrence, \u201cjump\u201d was used metaphorically, and our model, along with the other models, correctly identified it as metaphorical.</p>\n</div>\n<section class=\"ltx_section\" id=\"S6\">\n<h2 class=\"ltx_title ltx_font_bold ltx_title_section\" style=\"font-size:120%;\">6.\u00a0\u00a0\u00a0Conclusion and Future Work</h2>\n<div class=\"ltx_para\" id=\"S6.p1\">\n<p class=\"ltx_p\" id=\"S6.p1.1\">In this paper, we presented a RoBERTa-based model for metaphor detection that follows the Metaphor Identification Procedure by utilizing a WSD model to extract and contrast the contextual meaning with the basic meaning of a target word. We evaluated our model on several benchmark datasets and demonstrated that leveraging senses and contrasting them can enhance the performance of metaphor detection models. Our proposed model outperformed other state-of-the-art metaphor detection models. Our work provides compelling evidence for further exploration of the use of WSD models and sense-contrasting techniques to enhance the performance of metaphor detection models.</p>\n</div>\n<div class=\"ltx_para\" id=\"S6.p2\">\n<p class=\"ltx_p\" id=\"S6.p2.1\">In future work, we plan to investigate the integration of commonsense models such as COMET <cite class=\"ltx_cite ltx_citemacro_cite\">Bosselut et\u00a0al. (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.03103v2#bib.bib5\" title=\"\">2019 ###reference_b5### ###reference_b5###</a>)</cite> to extract and utilize the common sense knowledge from the target word. COMET was used extensively in recent metaphor generation models <cite class=\"ltx_cite ltx_citemacro_cite\">Elzohbi and Zhao (<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.03103v2#bib.bib9\" title=\"\">2023 ###reference_b9### ###reference_b9###</a>)</cite>. 
We believe that this integration will enable better differentiation of novel metaphors from nonsensical expressions.</p>\n</div>\n<section class=\"ltx_section\" id=\"S7\">\n<h2 class=\"ltx_title ltx_font_bold ltx_title_section\" style=\"font-size:120%;\">7.\u00a0\u00a0\u00a0Bibliographical References</h2>\n<span class=\"ltx_ERROR undefined\" id=\"S7.1\">\\c@NAT@ctr</span>\n<section class=\"ltx_bibliography\" id=\"bib\">\n<h2 class=\"ltx_title ltx_title_bibliography\"></h2>\n<ul class=\"ltx_biblist\">\n<li class=\"ltx_bibitem\" id=\"bib.bib1\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Babieno et\u00a0al. (2022)</span>\n<span class=\"ltx_bibblock\">\nMateusz Babieno, Masashi Takeshita, Dusan Radisavljevic, Rafal Rzepka, and\nKenji Araki. 2022.\n\n</span>\n<span class=\"ltx_bibblock\">Miss roberta wilde: Metaphor identification using masked language\nmodel with wiktionary lexical definitions.\n\n</span>\n<span class=\"ltx_bibblock\"><em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib1.1.1\">Applied Sciences</em>, 12(4):2081.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib2\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Baker et\u00a0al. (1998)</span>\n<span class=\"ltx_bibblock\">\nCollin\u00a0F Baker, Charles\u00a0J Fillmore, and John\u00a0B Lowe. 1998.\n\n</span>\n<span class=\"ltx_bibblock\">The berkeley framenet project.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib2.1.1\">Proceedings of the 17th International Conference on\nComputational Linguistics (ICCL)</em>, pages 86\u201390.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib3\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Bevilacqua and Navigli (2020)</span>\n<span class=\"ltx_bibblock\">\nMichele Bevilacqua and Roberto Navigli. 2020.\n\n</span>\n<span class=\"ltx_bibblock\">Breaking through the 80% glass ceiling: Raising the state of the art\nin word sense disambiguation by incorporating knowledge graph information.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib3.1.1\">Proceedings of the 58th Annual Meeting of the Association\nfor Computational Linguistics</em>, pages 2854\u20132864.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib4\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Birke and Sarkar (2006)</span>\n<span class=\"ltx_bibblock\">\nJulia Birke and Anoop Sarkar. 2006.\n\n</span>\n<span class=\"ltx_bibblock\">A clustering approach for nearly unsupervised recognition of\nnonliteral language.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib4.1.1\">11th Conference of the European Chapter of the Association\nfor Computational Linguistics</em>, pages 329\u2013336.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib5\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Bosselut et\u00a0al. (2019)</span>\n<span class=\"ltx_bibblock\">\nAntoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli\nCelikyilmaz, and Yejin Choi. 
2019.\n\n</span>\n<span class=\"ltx_bibblock\">Comet: Commonsense transformers for automatic knowledge graph\nconstruction.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib5.1.1\">Proceedings of the 57th Annual Meeting of the Association\nfor Computational Linguistics</em>, pages 4762\u20134779.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib6\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Bulat et\u00a0al. (2017)</span>\n<span class=\"ltx_bibblock\">\nLuana Bulat, SC\u00a0Clark, and Ekaterina Shutova. 2017.\n\n</span>\n<span class=\"ltx_bibblock\">Modelling metaphor with attribute-based semantics.\n\n</span>\n<span class=\"ltx_bibblock\">Association for Computational Linguistics.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib7\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Chen et\u00a0al. (2020)</span>\n<span class=\"ltx_bibblock\">\nXianyang Chen, Chee\u00a0Wee Leong, Michael Flor, and Beata\u00a0Beigman Klebanov. 2020.\n\n</span>\n<span class=\"ltx_bibblock\">Go figure! multi-task transformer-based architecture for metaphor\ndetection using idioms: Ets team in 2020 metaphor shared task.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib7.1.1\">Proceedings of the Second Workshop on Figurative Language\nProcessing</em>, pages 235\u2013243.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib8\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Choi et\u00a0al. (2021)</span>\n<span class=\"ltx_bibblock\">\nMinjin Choi, Sunkyung Lee, Eunseong Choi, Heesoo Park, Junhyuk Lee, Dongwon\nLee, and Jongwuk Lee. 2021.\n\n</span>\n<span class=\"ltx_bibblock\">Melbert: Metaphor detection via contextualized late interaction using\nmetaphorical identification theories.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib8.1.1\">Proceedings of the 2021 Conference of the North American\nChapter of the Association for Computational Linguistics: Human Language\nTechnologies</em>, pages 1763\u20131773.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib9\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Elzohbi and Zhao (2023)</span>\n<span class=\"ltx_bibblock\">\nMohamad Elzohbi and Richard Zhao. 2023.\n\n</span>\n<span class=\"ltx_bibblock\">Creative data generation: A review focusing on text and poetry.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib9.1.1\">Proceedings of the 14th International Conference on\nComputational Creativity (ICCC)</em>, pages 29\u201338.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib10\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Lakoff and Johnson (2008)</span>\n<span class=\"ltx_bibblock\">\nGeorge Lakoff and Mark Johnson. 2008.\n\n</span>\n<span class=\"ltx_bibblock\"><em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib10.1.1\">Metaphors we live by</em>.\n\n</span>\n<span class=\"ltx_bibblock\">University of Chicago press.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib11\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Leong et\u00a0al. (2020)</span>\n<span class=\"ltx_bibblock\">\nChee\u00a0Wee Leong, Beata\u00a0Beigman Klebanov, Chris Hamill, Egon Stemle, Rutuja\nUbale, and Xianyang Chen. 
2020.\n\n</span>\n<span class=\"ltx_bibblock\">A report on the 2020 vua and toefl metaphor detection shared task.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib11.1.1\">Proceedings of the second workshop on figurative language\nprocessing</em>, pages 18\u201329.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib12\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Leong et\u00a0al. (2018)</span>\n<span class=\"ltx_bibblock\">\nChee\u00a0Wee Leong, Beata\u00a0Beigman Klebanov, and Ekaterina Shutova. 2018.\n\n</span>\n<span class=\"ltx_bibblock\">A report on the 2018 vua metaphor detection shared task.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib12.1.1\">Proceedings of the Workshop on Figurative Language\nProcessing</em>, pages 56\u201366.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib13\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Levine et\u00a0al. (2020)</span>\n<span class=\"ltx_bibblock\">\nYoav Levine, Barak Lenz, Or\u00a0Dagan, Ori Ram, Dan Padnos, Or\u00a0Sharir, Shai\nShalev-Shwartz, Amnon Shashua, and Yoav Shoham. 2020.\n\n</span>\n<span class=\"ltx_bibblock\">Sensebert: Driving some sense into bert.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib13.1.1\">Proceedings of the 58th Annual Meeting of the Association\nfor Computational Linguistics</em>, pages 4656\u20134667.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib14\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Li et\u00a0al. (2023)</span>\n<span class=\"ltx_bibblock\">\nYucheng Li, Shun Wang, Chenghua Lin, Frank Guerin, and Lo\u00efc Barrault.\n2023.\n\n</span>\n<span class=\"ltx_bibblock\">Framebert: Conceptual metaphor detection with frame embedding\nlearning.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib14.1.1\">Proceedings of the 17th Conference of the European Chapter\nof the Association for Computational Linguistics</em>, pages 1550\u20131555.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib15\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Mao et\u00a0al. (2018)</span>\n<span class=\"ltx_bibblock\">\nRui Mao, Chenghua Lin, and Frank Guerin. 2018.\n\n</span>\n<span class=\"ltx_bibblock\">Word embedding and wordnet based metaphor identification and\ninterpretation.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib15.1.1\">Proceedings of the 56th Annual Meeting of the Association\nfor Computational Linguistics (Volume 1: Long Papers)</em>, pages 1222\u20131231.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib16\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Maudslay and Teufel (2022)</span>\n<span class=\"ltx_bibblock\">\nRowan\u00a0Hall Maudslay and Simone Teufel. 2022.\n\n</span>\n<span class=\"ltx_bibblock\">Metaphorical polysemy detection: Conventional metaphor meets word\nsense disambiguation.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib16.1.1\">Proceedings of the 29th International Conference on\nComputational Linguistics (ICCL)</em>, pages 65\u201377.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib17\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Mohammad et\u00a0al. (2016)</span>\n<span class=\"ltx_bibblock\">\nSaif Mohammad, Ekaterina Shutova, and Peter Turney. 
2016.\n\n</span>\n<span class=\"ltx_bibblock\">Metaphor as a medium for emotion: An empirical study.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib17.1.1\">Proceedings of the Fifth Joint Conference on Lexical and\nComputational Semantics</em>, pages 23\u201333.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib18\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Pragglejaz\u00a0Group (2007)</span>\n<span class=\"ltx_bibblock\">\n<span class=\"ltx_text ltx_phantom\" id=\"bib.bib18.1.1\"><span style=\"visibility:hidden\">P</span></span> Pragglejaz\u00a0Group. 2007.\n\n</span>\n<span class=\"ltx_bibblock\">Mip: A method for identifying metaphorically used words in discourse.\n\n</span>\n<span class=\"ltx_bibblock\"><em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib18.2.1\">Metaphor and symbol</em>, 22(1):1\u201339.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib19\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Song et\u00a0al. (2021)</span>\n<span class=\"ltx_bibblock\">\nWei Song, Shuhui Zhou, Ruiji Fu, Ting Liu, and Lizhen Liu. 2021.\n\n</span>\n<span class=\"ltx_bibblock\">Verb metaphor detection via contextual relation learning.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib19.1.1\">Proceedings of the 59th Annual Meeting of the Association\nfor Computational Linguistics and the 11th International Joint Conference on\nNatural Language Processing (Volume 1: Long Papers)</em>, pages 4240\u20134251.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib20\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Steen et\u00a0al. (2010)</span>\n<span class=\"ltx_bibblock\">\nGerard Steen, Aletta\u00a0G Dorst, J\u00a0Berenike Herrmann, Anna Kaal, Tina Krennmayr,\nTrijntje Pasma, et\u00a0al. 2010.\n\n</span>\n<span class=\"ltx_bibblock\">A method for linguistic metaphor identification.\n\n</span>\n<span class=\"ltx_bibblock\"><em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib20.1.1\">Converging Evidence in Language and Communication Research.\nJohn Benjamins, Amsterdam</em>, 14.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib21\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Su et\u00a0al. (2020)</span>\n<span class=\"ltx_bibblock\">\nChuandong Su, Fumiyo Fukumoto, Xiaoxi Huang, Jiyi Li, Rongbo Wang, and Zhiqun\nChen. 2020.\n\n</span>\n<span class=\"ltx_bibblock\">DeepMet: A reading comprehension paradigm for token-level\nmetaphor detection.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib21.1.1\">Proceedings of the Second Workshop on Figurative Language\nProcessing (ACL)</em>, pages 30\u201339.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib22\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Wan et\u00a0al. (2021)</span>\n<span class=\"ltx_bibblock\">\nHai Wan, Jinxia Lin, Jianfeng Du, Dawei Shen, and Manrong Zhang. 2021.\n\n</span>\n<span class=\"ltx_bibblock\">Enhancing metaphor detection by gloss-based interpretations.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib22.1.1\">Findings of the Association for Computational Linguistics:\nACL-IJCNLP 2021</em>, pages 1971\u20131981.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib23\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Wu et\u00a0al. 
(2018)</span>\n<span class=\"ltx_bibblock\">\nChuhan Wu, Fangzhao Wu, Yubo Chen, Sixing Wu, Zhigang Yuan, and Yongfeng Huang.\n2018.\n\n</span>\n<span class=\"ltx_bibblock\">Neural metaphor detecting with cnn-lstm model.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib23.1.1\">Proceedings of the workshop on figurative language\nprocessing</em>, pages 110\u2013114.\n\n</span>\n</li>\n<li class=\"ltx_bibitem\" id=\"bib.bib24\">\n<span class=\"ltx_tag ltx_role_refnum ltx_tag_bibitem\">Yap et\u00a0al. (2020)</span>\n<span class=\"ltx_bibblock\">\nBoon\u00a0Peng Yap, Andrew Koh, and Eng\u00a0Siong Chng. 2020.\n\n</span>\n<span class=\"ltx_bibblock\">Adapting BERT for word sense disambiguation with gloss selection\nobjective and example sentences.\n\n</span>\n<span class=\"ltx_bibblock\">In <em class=\"ltx_emph ltx_font_italic\" id=\"bib.bib24.1.1\">Findings of the Association for Computational Linguistics:\nEMNLP 2020 (ACL)</em>, pages 41\u201346.\n\n</span>\n</li>\n</ul>\n</section>\n<div class=\"ltx_para\" id=\"S7.p1\">\n<p class=\"ltx_p\" id=\"S7.p1.1\"></p>\n</div>\n</section>\n</section>\n</section>\n</figure>",
|
| 80 |
+
"capture": "Table 5: Examples of predictions made by ContrastWSD and the baseline models on the VUA-20 testing dataset. \u2713 marks a metaphor prediction and \u2715 marks a literal prediction."
|
| 81 |
+
}
|
| 82 |
+
},
|
| 83 |
+
"image_paths": {
|
| 84 |
+
"1": {
|
| 85 |
+
"figure_path": "2309.03103v2_figure_1.png",
|
| 86 |
+
"caption": "Figure 1: The Metaphor Identification Procedure",
|
| 87 |
+
"url": "http://arxiv.org/html/2309.03103v2/extracted/5490045/images/MIP.png"
|
| 88 |
+
},
|
| 89 |
+
"2": {
|
| 90 |
+
"figure_path": "2309.03103v2_figure_2.png",
|
| 91 |
+
"caption": "Figure 2: ContrastWSD overall framework showing both stages: (i) the data augmentation stage and (ii) the metaphor detection stage.",
|
| 92 |
+
"url": "http://arxiv.org/html/2309.03103v2/extracted/5490045/images/WSD_MODEL.png"
|
| 93 |
+
}
|
| 94 |
+
},
|
| 95 |
+
"validation": true,
|
| 96 |
+
"references": [
|
| 97 |
+
{
|
| 98 |
+
"1": {
|
| 99 |
+
"title": "Miss roberta wilde: Metaphor identification using masked language\nmodel with wiktionary lexical definitions.",
|
| 100 |
+
"author": "Mateusz Babieno, Masashi Takeshita, Dusan Radisavljevic, Rafal Rzepka, and\nKenji Araki. 2022.",
|
| 101 |
+
"venue": "Applied Sciences, 12(4):2081.",
|
| 102 |
+
"url": null
|
| 103 |
+
}
|
| 104 |
+
},
|
| 105 |
+
{
|
| 106 |
+
"2": {
|
| 107 |
+
"title": "The berkeley framenet project.",
|
| 108 |
+
"author": "Collin F Baker, Charles J Fillmore, and John B Lowe. 1998.",
|
| 109 |
+
"venue": "In Proceedings of the 17th International Conference on\nComputational Linguistics (ICCL), pages 86\u201390.",
|
| 110 |
+
"url": null
|
| 111 |
+
}
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"3": {
|
| 115 |
+
"title": "Breaking through the 80% glass ceiling: Raising the state of the art\nin word sense disambiguation by incorporating knowledge graph information.",
|
| 116 |
+
"author": "Michele Bevilacqua and Roberto Navigli. 2020.",
|
| 117 |
+
"venue": "In Proceedings of the 58th Annual Meeting of the Association\nfor Computational Linguistics, pages 2854\u20132864.",
|
| 118 |
+
"url": null
|
| 119 |
+
}
|
| 120 |
+
},
|
| 121 |
+
{
|
| 122 |
+
"4": {
|
| 123 |
+
"title": "A clustering approach for nearly unsupervised recognition of\nnonliteral language.",
|
| 124 |
+
"author": "Julia Birke and Anoop Sarkar. 2006.",
|
| 125 |
+
"venue": "In 11th Conference of the European Chapter of the Association\nfor Computational Linguistics, pages 329\u2013336.",
|
| 126 |
+
"url": null
|
| 127 |
+
}
|
| 128 |
+
},
|
| 129 |
+
{
|
| 130 |
+
"5": {
|
| 131 |
+
"title": "Comet: Commonsense transformers for automatic knowledge graph\nconstruction.",
|
| 132 |
+
"author": "Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli\nCelikyilmaz, and Yejin Choi. 2019.",
|
| 133 |
+
"venue": "In Proceedings of the 57th Annual Meeting of the Association\nfor Computational Linguistics, pages 4762\u20134779.",
|
| 134 |
+
"url": null
|
| 135 |
+
}
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"6": {
|
| 139 |
+
"title": "Modelling metaphor with attribute-based semantics.",
|
| 140 |
+
"author": "Luana Bulat, SC Clark, and Ekaterina Shutova. 2017.",
|
| 141 |
+
"venue": "Association for Computational Linguistics.",
|
| 142 |
+
"url": null
|
| 143 |
+
}
|
| 144 |
+
},
|
| 145 |
+
{
|
| 146 |
+
"7": {
|
| 147 |
+
"title": "Go figure! multi-task transformer-based architecture for metaphor\ndetection using idioms: Ets team in 2020 metaphor shared task.",
|
| 148 |
+
"author": "Xianyang Chen, Chee Wee Leong, Michael Flor, and Beata Beigman Klebanov. 2020.",
|
| 149 |
+
"venue": "In Proceedings of the Second Workshop on Figurative Language\nProcessing, pages 235\u2013243.",
|
| 150 |
+
"url": null
|
| 151 |
+
}
|
| 152 |
+
},
|
| 153 |
+
{
|
| 154 |
+
"8": {
|
| 155 |
+
"title": "Melbert: Metaphor detection via contextualized late interaction using\nmetaphorical identification theories.",
|
| 156 |
+
"author": "Minjin Choi, Sunkyung Lee, Eunseong Choi, Heesoo Park, Junhyuk Lee, Dongwon\nLee, and Jongwuk Lee. 2021.",
|
| 157 |
+
"venue": "In Proceedings of the 2021 Conference of the North American\nChapter of the Association for Computational Linguistics: Human Language\nTechnologies, pages 1763\u20131773.",
|
| 158 |
+
"url": null
|
| 159 |
+
}
|
| 160 |
+
},
|
| 161 |
+
{
|
| 162 |
+
"9": {
|
| 163 |
+
"title": "Creative data generation: A review focusing on text and poetry.",
|
| 164 |
+
"author": "Mohamad Elzohbi and Richard Zhao. 2023.",
|
| 165 |
+
"venue": "In Proceedings of the 14th International Conference on\nComputational Creativity (ICCC), pages 29\u201338.",
|
| 166 |
+
"url": null
|
| 167 |
+
}
|
| 168 |
+
},
|
| 169 |
+
{
|
| 170 |
+
"10": {
|
| 171 |
+
"title": "Metaphors we live by.",
|
| 172 |
+
"author": "George Lakoff and Mark Johnson. 2008.",
|
| 173 |
+
"venue": "University of Chicago press.",
|
| 174 |
+
"url": null
|
| 175 |
+
}
|
| 176 |
+
},
|
| 177 |
+
{
|
| 178 |
+
"11": {
|
| 179 |
+
"title": "A report on the 2020 vua and toefl metaphor detection shared task.",
|
| 180 |
+
"author": "Chee Wee Leong, Beata Beigman Klebanov, Chris Hamill, Egon Stemle, Rutuja\nUbale, and Xianyang Chen. 2020.",
|
| 181 |
+
"venue": "In Proceedings of the second workshop on figurative language\nprocessing, pages 18\u201329.",
|
| 182 |
+
"url": null
|
| 183 |
+
}
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"12": {
|
| 187 |
+
"title": "A report on the 2018 vua metaphor detection shared task.",
|
| 188 |
+
"author": "Chee Wee Leong, Beata Beigman Klebanov, and Ekaterina Shutova. 2018.",
|
| 189 |
+
"venue": "In Proceedings of the Workshop on Figurative Language\nProcessing, pages 56\u201366.",
|
| 190 |
+
"url": null
|
| 191 |
+
}
|
| 192 |
+
},
|
| 193 |
+
{
|
| 194 |
+
"13": {
|
| 195 |
+
"title": "Sensebert: Driving some sense into bert.",
|
| 196 |
+
"author": "Yoav Levine, Barak Lenz, Or Dagan, Ori Ram, Dan Padnos, Or Sharir, Shai\nShalev-Shwartz, Amnon Shashua, and Yoav Shoham. 2020.",
|
| 197 |
+
"venue": "In Proceedings of the 58th Annual Meeting of the Association\nfor Computational Linguistics, pages 4656\u20134667.",
|
| 198 |
+
"url": null
|
| 199 |
+
}
|
| 200 |
+
},
|
| 201 |
+
{
|
| 202 |
+
"14": {
|
| 203 |
+
"title": "Framebert: Conceptual metaphor detection with frame embedding\nlearning.",
|
| 204 |
+
"author": "Yucheng Li, Shun Wang, Chenghua Lin, Frank Guerin, and Lo\u00efc Barrault.\n2023.",
|
| 205 |
+
"venue": "In Proceedings of the 17th Conference of the European Chapter\nof the Association for Computational Linguistics, pages 1550\u20131555.",
|
| 206 |
+
"url": null
|
| 207 |
+
}
|
| 208 |
+
},
|
| 209 |
+
{
|
| 210 |
+
"15": {
|
| 211 |
+
"title": "Word embedding and wordnet based metaphor identification and\ninterpretation.",
|
| 212 |
+
"author": "Rui Mao, Chenghua Lin, and Frank Guerin. 2018.",
|
| 213 |
+
"venue": "In Proceedings of the 56th Annual Meeting of the Association\nfor Computational Linguistics (Volume 1: Long Papers), pages 1222\u20131231.",
|
| 214 |
+
"url": null
|
| 215 |
+
}
|
| 216 |
+
},
|
| 217 |
+
{
|
| 218 |
+
"16": {
|
| 219 |
+
"title": "Metaphorical polysemy detection: Conventional metaphor meets word\nsense disambiguation.",
|
| 220 |
+
"author": "Rowan Hall Maudslay and Simone Teufel. 2022.",
|
| 221 |
+
"venue": "In Proceedings of the 29th International Conference on\nComputational Linguistics (ICCL), pages 65\u201377.",
|
| 222 |
+
"url": null
|
| 223 |
+
}
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"17": {
|
| 227 |
+
"title": "Metaphor as a medium for emotion: An empirical study.",
|
| 228 |
+
"author": "Saif Mohammad, Ekaterina Shutova, and Peter Turney. 2016.",
|
| 229 |
+
"venue": "In Proceedings of the Fifth Joint Conference on Lexical and\nComputational Semantics, pages 23\u201333.",
|
| 230 |
+
"url": null
|
| 231 |
+
}
|
| 232 |
+
},
|
| 233 |
+
{
|
| 234 |
+
"18": {
|
| 235 |
+
"title": "Mip: A method for identifying metaphorically used words in discourse.",
|
| 236 |
+
"author": "P Pragglejaz Group. 2007.",
|
| 237 |
+
"venue": "Metaphor and symbol, 22(1):1\u201339.",
|
| 238 |
+
"url": null
|
| 239 |
+
}
|
| 240 |
+
},
|
| 241 |
+
{
|
| 242 |
+
"19": {
|
| 243 |
+
"title": "Verb metaphor detection via contextual relation learning.",
|
| 244 |
+
"author": "Wei Song, Shuhui Zhou, Ruiji Fu, Ting Liu, and Lizhen Liu. 2021.",
|
| 245 |
+
"venue": "In Proceedings of the 59th Annual Meeting of the Association\nfor Computational Linguistics and the 11th International Joint Conference on\nNatural Language Processing (Volume 1: Long Papers), pages 4240\u20134251.",
|
| 246 |
+
"url": null
|
| 247 |
+
}
|
| 248 |
+
},
|
| 249 |
+
{
|
| 250 |
+
"20": {
|
| 251 |
+
"title": "A method for linguistic metaphor identification.",
|
| 252 |
+
"author": "Gerard Steen, Aletta G Dorst, J Berenike Herrmann, Anna Kaal, Tina Krennmayr,\nTrijntje Pasma, et al. 2010.",
|
| 253 |
+
"venue": "Converging Evidence in Language and Communication Research.\nJohn Benjamins, Amsterdam, 14.",
|
| 254 |
+
"url": null
|
| 255 |
+
}
|
| 256 |
+
},
|
| 257 |
+
{
|
| 258 |
+
"21": {
|
| 259 |
+
"title": "DeepMet: A reading comprehension paradigm for token-level\nmetaphor detection.",
|
| 260 |
+
"author": "Chuandong Su, Fumiyo Fukumoto, Xiaoxi Huang, Jiyi Li, Rongbo Wang, and Zhiqun\nChen. 2020.",
|
| 261 |
+
"venue": "In Proceedings of the Second Workshop on Figurative Language\nProcessing (ACL), pages 30\u201339.",
|
| 262 |
+
"url": null
|
| 263 |
+
}
|
| 264 |
+
},
|
| 265 |
+
{
|
| 266 |
+
"22": {
|
| 267 |
+
"title": "Enhancing metaphor detection by gloss-based interpretations.",
|
| 268 |
+
"author": "Hai Wan, Jinxia Lin, Jianfeng Du, Dawei Shen, and Manrong Zhang. 2021.",
|
| 269 |
+
"venue": "In Findings of the Association for Computational Linguistics:\nACL-IJCNLP 2021, pages 1971\u20131981.",
|
| 270 |
+
"url": null
|
| 271 |
+
}
|
| 272 |
+
},
|
| 273 |
+
{
|
| 274 |
+
"23": {
|
| 275 |
+
"title": "Neural metaphor detecting with cnn-lstm model.",
|
| 276 |
+
"author": "Chuhan Wu, Fangzhao Wu, Yubo Chen, Sixing Wu, Zhigang Yuan, and Yongfeng Huang.\n2018.",
|
| 277 |
+
"venue": "In Proceedings of the workshop on figurative language\nprocessing, pages 110\u2013114.",
|
| 278 |
+
"url": null
|
| 279 |
+
}
|
| 280 |
+
},
|
| 281 |
+
{
|
| 282 |
+
"24": {
|
| 283 |
+
"title": "Adapting BERT for word sense disambiguation with gloss selection\nobjective and example sentences.",
|
| 284 |
+
"author": "Boon Peng Yap, Andrew Koh, and Eng Siong Chng. 2020.",
|
| 285 |
+
"venue": "In Findings of the Association for Computational Linguistics:\nEMNLP 2020 (ACL), pages 41\u201346.",
|
| 286 |
+
"url": null
|
| 287 |
+
}
|
| 288 |
+
}
|
| 289 |
+
],
|
| 290 |
+
"url": "http://arxiv.org/html/2309.03103v2"
|
| 291 |
+
}
|
20240323/2309.04937v3.json
ADDED
|
@@ -0,0 +1,250 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "LONER: LiDAR Only Neural Representations for Real-Time SLAM",
|
| 3 |
+
"abstract": "This paper proposes LONER, the first real-time LiDAR SLAM algorithm that uses a neural implicit scene representation.\nExisting implicit mapping methods for LiDAR show promising results in large-scale reconstruction, but either require groundtruth poses or run slower than real-time.\nIn contrast, LONER uses LiDAR data to train an MLP to estimate a dense map in real-time, while simultaneously estimating the trajectory of the sensor. To achieve real-time performance, this paper proposes a novel information-theoretic loss function that accounts for the fact that different regions of the map may be learned to varying degrees throughout online training. The proposed method is evaluated qualitatively and quantitatively on two open-source datasets.\nThis evaluation illustrates that the proposed loss function converges faster and leads to more accurate geometry reconstruction than other loss functions used in depth-supervised neural implicit frameworks.\nFinally, this paper shows that LONER estimates trajectories competitively with state-of-the-art LiDAR SLAM methods, while also producing dense maps competitive with existing real-time implicit mapping methods that use groundtruth poses.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Neural implicit scene representations, such as Neural Radiance Fields (NeRFs), offer a promising new way to represent maps for robotics applications [1 ###reference_b1###].\nTraditional NeRFs employ a Multi-Layer Perceptron (MLP) to estimate the radiance and volume density of each point in space, enabling dense scene reconstruction and novel view synthesis.\nThe learned scene representation has several advantages over conventional map representations, such as point clouds and occupancy grids.\nFirst, because the domain of the NeRF is continuous and does not enforce discretization, any point in the scene can be queried for occupancy.\nThe continuity of the scene can be exploited to solve a variety of robotics problems.\nFor example, as demonstrated in [2 ###reference_b2###], a motion planner can integrate the volume density along a proposed trajectory to evaluate the likelihood of a collision.\nOther benefits include the ability to produce realistic renders of the scene [1 ###reference_b1###].\nFurther, NeRFs can be used to estimate uncertainty of renders to enable view selection for active exploration [3 ###reference_b3###].\nThis paper advances neural implicit scene representations for robotics applications. Specifically, we introduce the first real-time LiDAR-only SLAM algorithm that achieves accurate pose estimation and map reconstruction and learns a neural implicit representation of a scene.\n###figure_1### Several recent papers have proposed real-time NeRF-based visual SLAM systems using monocular or RGB-D cameras [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###]. These systems demonstrate impressive performance on indoor scenes. For outdoor environments, prior work has focused on using neural implicit representations for LiDAR to enable dense 3D reconstruction and novel view synthesis for large-scale scenes [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###].\nRecent methods have even shown promising results for LiDAR localization and mapping with neural implicit frameworks in large-scale outdoor scenes [11 ###reference_b11###, 12 ###reference_b12###].\nStill, these LiDAR-supervised algorithms do not operate in real-time, which is necessary for robotics applications.\nThe contributions of this paper are as follows:\nWe propose the first real-time neural implicit LiDAR SLAM method, which adapts to outdoor environments and provides accurate online state estimation.\nWe introduce a novel loss function that leads to faster convergence and more accurate reconstruction than existing loss functions.\nWe demonstrate that our proposed method, LONER, runs in real-time and estimates both trajectories and maps more accurately than baselines. Figure 1 ###reference_### shows the reconstruction results on the Fusion Portable dataset [4 ###reference_b4###]. A project page is available at https://umautobots.github.io/loner ###reference_###.\nThe remainder of this paper is organized as follows: In Section II ###reference_###, we review related work. In Section III ###reference_###, we describe LONER. In Section IV ###reference_###, we evaluate LONER, and in Section V ###reference_###, we conclude and discuss both limitations and future work."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Related Works",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A LiDAR SLAM",
|
| 21 |
+
"text": "LiDAR SLAM has been an active research area over the past several decades [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###]. The primary goal of these methods is to estimate the trajectory of the ego vehicle. Modern methods such as LeGO-LOAM estimate motion by aligning features extracted from consecutive scans, then accumulate LiDAR scans to build a map [13 ###reference_b13###, 15 ###reference_b15###].\nThese works primarily focus on accurate trajectory estimation, and thus creating dense, realistic maps is not a focus of the approach. In contrast, our method aims to achieve similar or better trajectory estimation while also estimating dense maps."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B Real-time NeRF-based SLAM",
|
| 27 |
+
"text": "NeRFs use images of a scene captured from known camera poses to train an MLP to predict what the scene will look like from novel poses [1 ###reference_b1###].\nWhile originally developed for offline use with known camera poses, NeRFs have recently been used to learn an implicit scene representation in RGB and RGB-D SLAM frameworks [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###].\nBy representing the scene with a NeRF, these algorithms perform both tracking and mapping via gradient descent on an MLP or related feature vectors.\nFor example, iMAP represents the scene as a single MLP [5 ###reference_b5###].\nEach RGB-D frame is first tracked by fixing the MLP weights and optimizing the camera poses.\nThen, the new information is incorporated into the map by jointly optimizing the MLP and pose estimates.\niMAP shows promising results in small scenarios but does not scale to larger or outdoor scenes.\nNICE-SLAM replaces iMAP\u2019s simple MLP with a hierarchical feature grid combined with MLP decoders [6 ###reference_b6###].\nThis approach demonstrates better scalability than the single MLP used in iMAP, but NICE-SLAM still only works in indoor scenarios.\nAdditionally, NeRF-SLAM uses DROID-SLAM [18 ###reference_b18###] as the tracking front-end, which allows them to use a probabilistic volumetric NeRF to perform uncertainty-aware mapping and pose refinement [7 ###reference_b7###].\nRecently, several more papers have introduced architectures and encodings to improve neural-implicit SLAM\u2019s memory efficiency, computation speed, and accuracy [19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###].\nOur method extends these recent advances to leverage implicit scene representation for real-time LiDAR-only SLAM, which allows operation in large, outdoor environments."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "II-C Neural Implicit Representations for LiDAR",
|
| 33 |
+
"text": "While neural implicit representations were initially developed for visual applications, several works have introduced neural implicit representations for LiDAR to improve outdoor 3D reconstruction performance [8 ###reference_b8###, 23 ###reference_b23###, 9 ###reference_b9###].\nUrban Radiance Fields (URF) is an early example of LiDAR-integraded NeRF [8 ###reference_b8###].\nURF uses a novel Line-of-Sight (LOS) loss to improve LiDAR supervision. CLONeR uses LiDAR and camera data to train two decoupled MLPs, one of which learns scene structure and the other of which learns scene color [9 ###reference_b9###].\nCLONeR combines the decoupled NeRF with occupancy-grid enabled sampling heuristics and URF\u2019s Line-of-Sight loss to enable training with as few as two input views [9 ###reference_b9###].\nBoth URF and CLONeR require known sensor poses and assume offline training.\nIn contrast, our proposed method performs real-time LiDAR SLAM that both reconstructs 3D environments and estimates sensor poses for sequential input data.\nIn [12 ###reference_b12###], a method is introduced that inputs LiDAR scans and approximate poses, then uses a novel occlusion-aware loss function to jointly optimize the poses and a NeRF.\nThis work assumes a-priori availability of all data.\nThus, it can be effectively viewed as a LiDAR-based structure-from-motion algorithm, whereas we present a full SLAM algorithm.\nRecently, SHINE Mapping presented a LiDAR mapping method based on neural signed distance function (SDF) and sparse feature embedding [10 ###reference_b10###].\nWhile this embedding helps scale to large scenes, in a real-time configuration, it presents a trade-off between hole-filling and overall map resolution. Our method instead uses a dense feature embedding, which enables improved performance across both hole-filling capability and map resolution.\nNeRF-LOAM extends this to a LiDAR SLAM system and proposes a dynamic voxel embedding generation strategy to adapt to large-scale scenarios [11 ###reference_b11###]. However, it does not operate in real-time.\n###figure_2###"
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.4",
|
| 37 |
+
"parent_section_id": "2",
|
| 38 |
+
"section_name": "II-D Loss for Depth-supervised NeRF",
|
| 39 |
+
"text": "Depth-supervised NeRF frameworks, such as those that use RGB-D sensors, typically use the difference between rendered and sensed depth as a loss to learn geometry from 2D images by volumetric rendering [5 ###reference_b5###, 6 ###reference_b6###]. Other works use depth measurements directly in 3D space to perform depth-supervision [23 ###reference_b23###, 8 ###reference_b8###, 9 ###reference_b9###, 12 ###reference_b12###].\nThe Binary Cross-Entropy (BCE) loss proposed in [12 ###reference_b12###] reasons about occluded objects, but does not consider measurement uncertainty.\nThe KL divergence loss presented by DS-NeRF [23 ###reference_b23###] and Line-Of-Sight (LOS) loss introduced by URF [8 ###reference_b8###] approximate each LiDAR ray\u2019s termination depth as a normal distribution centered at the measured depth.\nThe variance of the distribution is correlated with a margin parameter .\nThe loss functions encourage the network to predict weights along a ray equal to the PDF of the normal distribution.\nWhile the KL loss leaves the variance fixed during training, [8 ###reference_b8###] shows that decaying during training improves reconstruction accuracy when using the LOS loss.\nWhile uniformly decaying a margin is successful offline, using a single margin for all rays is unsuitable for real-time SLAM, which has incremental input and limited training samples.\nUsing a uniform margin can force the NeRF model to forget learned geometry when adding a new LiDAR scan and can cause slower convergence.\nTherefore, this paper proposes a novel dynamic margin loss that applies a different margin for each ray.\nWe demonstrate the proposed loss function leads to better 3D reconstruction than previous loss functions within fewer training samples, and enables real-time performance."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3",
|
| 43 |
+
"parent_section_id": null,
|
| 44 |
+
"section_name": "III Method",
|
| 45 |
+
"text": "This section provides a high-level overview of our proposed system, LONER, before explaining each component in detail."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.1",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "III-A System Overview",
|
| 51 |
+
"text": "An overview of LONER is shown in Fig. 2 ###reference_###. As is common in the SLAM literature [5 ###reference_b5###, 6 ###reference_b6###, 24 ###reference_b24###], the system comprises parallel threads for tracking and mapping. The tracking thread processes incoming scans and estimates odometry using ICP. LONER is designed for use without an IMU, so ICP uses the identity transformation as an initial guess. In parallel and at a lower rate, the mapping thread uses the current scan and selected prior scans as KeyFrames, which are used to update the training of the neural scene representation."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.2",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "III-B Tracking",
|
| 57 |
+
"text": "Incoming LiDAR scans are decimated to a fixed frequency of 5Hz. The relative transform from the previous scan to the current scan is estimated using Point-to-Plane ICP [25 ###reference_b25###]. We use ICP instead of inverse-Nerf since it offers good real-time performance while allowing GPU resources to be dedicated to the mapping module. The ICP estimate is later refined in our mapping optimization. The LiDAR pose is then estimated as . Given the previous and current pose, LiDAR scans are motion-compensated by assuming constant velocity motion between scans."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.3",
|
| 61 |
+
"parent_section_id": "3",
|
| 62 |
+
"section_name": "III-C Implicit Map Representation",
|
| 63 |
+
"text": "The scene is represented as an MLP with the hierarchical feature grid encoding from [26 ###reference_b26###]. During online training, the parameters of the MLP and the feature grid are updated to predict the volume density of each point in space. To train the network and estimate depths, we follow the standard volumetric rendering procedure [1 ###reference_b1###]. In particular, for a LiDAR ray with origin and direction , we choose distances to create samples . LiDAR intrinsics dictate , while depends on the scale of the scene. The feature grid and MLP, collectively , are queried to predict the occupancy state . Then, weights transmittances and weights are computed according to:\nwhere , and is the density at sample predicted by the MLP. The weights are used by the loss function and represent the probability that the ray terminates at each point. Therefore, the expected termination depth of a ray can be estimated as"
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.4",
|
| 67 |
+
"parent_section_id": "3",
|
| 68 |
+
"section_name": "III-D Mapping",
|
| 69 |
+
"text": "The mapping thread receives LiDAR scans from the tracking thread and determines whether to form a KeyFrame. If the scan is accepted, the map is jointly optimized with the poses."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "3.4.1",
|
| 73 |
+
"parent_section_id": "3.4",
|
| 74 |
+
"section_name": "III-D1 KeyFrames",
|
| 75 |
+
"text": "KeyFrames are selected temporally: if has passed since the previous KeyFrame, a new KeyFrame is added. Each time a KeyFrame is accepted, the optimizer is updated. total KeyFrames are used in the update, including the current KeyFrame and random selected past KeyFrames."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "3.4.2",
|
| 79 |
+
"parent_section_id": "3.4",
|
| 80 |
+
"section_name": "III-D2 Optimization",
|
| 81 |
+
"text": "Once the window of KeyFrames has been selected, the map is jointly optimized with the poses of KeyFrames in the optimization window. For a KeyFrame with estimated pose in the world frame, a twist vector is formed to be used as the optimization variable. Specifically, where is the axis-angle representation of the rotation component of , and is the translation component. In the forward pass, this vector is converted back into a pose and used to compute the origin of rays. rays are sampled at random from the LiDAR scan, and depth samples are taken from each ray using the occupancy grid heuristic introduced by [9 ###reference_b9###].\nIn the backward pass, gradients are computed for MLP and feature grid parameters and twist vectors . At the end of the optimization, the optimized twist vectors are converted into transformation matrices . The tracking thread is informed of this change, such that future tracking is performed relative to the optimized poses.\n###figure_3###"
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "3.5",
|
| 85 |
+
"parent_section_id": "3",
|
| 86 |
+
"section_name": "III-E JS Dynamic Margin Loss Function",
|
| 87 |
+
"text": "The primary loss function in our system is a novel dynamic margin loss. This is combined with terms for depth loss and sky loss as follows:\nEach of these terms is explained below."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "3.5.1",
|
| 91 |
+
"parent_section_id": "3.5",
|
| 92 |
+
"section_name": "III-E1 JS Loss Formulation",
|
| 93 |
+
"text": "The LOS loss used by [8 ###reference_b8###, 9 ###reference_b9###] uses a single margin for all rays; we use a similar formulation but introduce a novel strategy based on the Jensen-Shannon Divergence [27 ###reference_b27###] to assign a unique margin to each ray. For a given LiDAR ray , the samples along the ray are , and denotes the measured depth along the ray. denotes the distance of individual training samples along the ray, and represents a corresponding weight prediction from an MLP, as defined in Equation 2 ###reference_###. We define a truncated Gaussian distribution that has a bounded domain parameterized by margin , with as the training distribution. Thus, target weights are given by . The JS loss is defined as\nwhere the opacity loss (explained in more detail by [9 ###reference_b9###]) ensures weights along each ray sum to one and thus form a probability distribution. Note that while URF [8 ###reference_b8###] uses an L2 loss to compute the LOS loss, we follow [9 ###reference_b9###] and use an L1 loss. The effect of this is discussed in Section IV-D ###reference_###.\nIn [8 ###reference_b8###, 9 ###reference_b9###], the margin decays exponentially throughout training and, at each iteration, a single margin is shared by all of the rays. In contrast, we present a JS divergence-based dynamic margin that computes a unique margin for each ray to improve the training convergence and reconstruction accuracy.\nIn a SLAM application, continuous optimization, sparse sampling, and incremental input lead to different regions of the map being learned to varying degrees during online training.\nAs shown in Fig. 3 ###reference_###, using a uniform in the LOS loss causes forgetting in regions that have already been learned. The idea of the JS dynamic margin is to use a larger margin for rays pointing toward regions of the map with unknown geometry while using a smaller margin for rays pointing toward well-learned regions. This allows the system to learn new regions while preserving and refining learned geometry.\nWe use the JS divergence to measure the dissimilarity between the goal distribution and the sample distribution for each ray, which represents how well the map has learned along the ray. Learned regions have similar goal and sample distributions, which lead to smaller JS divergence.\nWe define a goal distribution , where . Further, we define the sample distribution , where and denote mean and standard deviation of the predicted weights along a particular ray. The dynamic margin is then defined as\nwhere is a constant scaling parameter. denotes the upper bound of the JS score, and denotes a threshold for scaling. Once the JS score is smaller than , is equal to ."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "3.5.2",
|
| 97 |
+
"parent_section_id": "3.5",
|
| 98 |
+
"section_name": "III-E2 Depth Loss",
|
| 99 |
+
"text": "As in [8 ###reference_b8###], we use the depth loss as an additional term in the loss function. The depth loss is the error between rendered depth and LiDAR-measured depth along each ray. The loss is defined as\nWe found the depth loss contributes to blurry reconstruction with limited training time, but still provides good hole-filling, as shown in Fig. 6 ###reference_###.\nHence, unlike [8 ###reference_b8###] which weights depth loss and LOS loss equally, we down-weight the depth loss by setting ."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "3.5.3",
|
| 103 |
+
"parent_section_id": "3.5",
|
| 104 |
+
"section_name": "III-E3 Sky Loss",
|
| 105 |
+
"text": "Similar to [8 ###reference_b8###], we add an additional loss to force weights on rays pointing at the sky to be zero.\nWhile [8 ###reference_b8###] segments the sky with camera-based semantic segmentation, we determine sky regions by observing holes in the LiDAR scans. First, each scan is converted to a depth image. This is then filtered via a single dilate and erode. Any points which remain empty reflect regions of the LiDAR scan where no return was received. If the ray corresponding to each of these points has a positive elevation angle in the global frame, it is determined to point to the sky. Thus, this heuristic works as long as the LiDAR is approximately level during initialization. For sky rays, the opacity loss is not enforced. Then, for all sky rays, the following loss is computed:"
|
| 106 |
+
},
|
| 107 |
+
{
|
| 108 |
+
"section_id": "3.6",
|
| 109 |
+
"parent_section_id": "3",
|
| 110 |
+
"section_name": "III-F Meshing",
|
| 111 |
+
"text": "To form a mesh from the implicit geometry, a virtual LiDAR is placed at estimated KeyFrame poses. We compute weights along LiDAR rays, then bucket the weights into a 3D grid. When multiple weights fall within the same grid cell, the maximum value is kept. Marching cubes is then used to form a mesh from the result. This process runs offline for visualization and evaluation, and is not a part of online training."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "4",
|
| 115 |
+
"parent_section_id": null,
|
| 116 |
+
"section_name": "IV Experiments",
|
| 117 |
+
"text": "This section evaluates the trajectory estimation and mapping accuracy of LONER against state-of-the-art baselines. We further evaluate the choice of loss function and perform ablation studies over key features."
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"section_id": "4.1",
|
| 121 |
+
"parent_section_id": "4",
|
| 122 |
+
"section_name": "IV-A Implementation Details",
|
| 123 |
+
"text": "Table I ###reference_### provides parameters used for evaluation of LONER, which we found to generalize well across the tested datasets. All values were tuned experimentally to maximize performance while maintaining real-time operation. For all experiments, each method and configuration was run 5 times and we report the median result, as in [24 ###reference_b24###]. The complete data from our evaluations is available on the project webpage.\n###table_1###"
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"section_id": "4.2",
|
| 127 |
+
"parent_section_id": "4",
|
| 128 |
+
"section_name": "IV-B Baselines",
|
| 129 |
+
"text": "We evaluate against NICE-SLAM [6 ###reference_b6###] and LeGO-LOAM [15 ###reference_b15###], which represent state-of-the-art methods in neural-implicit SLAM and LiDAR SLAM respectively.\nAdditionally, we evaluate our SLAM pipeline with the loss functions from CLONeR [9 ###reference_b9###] and URF [8 ###reference_b8###]. We refer to these approaches as \u201cLONER w./ \u201d and \u201cLONER w./ \u201d respectively. Finally, mapping performance is compared to SHINE mapping, which is run with groundtruth poses [10 ###reference_b10###]. Since NeRF-LOAM [11 ###reference_b11###] is a recent work and code is not yet available, it is excluded from this evaluation in favor of SHINE. Note that NeRF-LOAM does not operate in real-time."
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "4.3",
|
| 133 |
+
"parent_section_id": "4",
|
| 134 |
+
"section_name": "IV-C Datasets",
|
| 135 |
+
"text": "We evaluate performance on two open source datasets, Fusion Portable [4 ###reference_b4###] and Newer College [28 ###reference_b28###]. Collectively, the chosen sequences represent a range of scales and difficulties.\nFrom Fusion Portable, we select three scenes. The first sequence is MCR Slow 01, which is a small indoor lab scene collected on a quadruped. The others are Canteen Day and Garden Day, which are medium-scale semi-outdoor courtyard areas collected on a handheld platform. Both sequences contain few dynamic objects, as handling dynamic objects is left to future work. From Newer College, we evaluate on the Quad Easy sequence, which consists of two laps of a large outdoor college quad area. Because Newer College has monochrome fisheye cameras, it is incompatible with NICE-SLAM. Hence, NICE-SLAM is excluded from the Newer College results.\nNote that the sequences used in testing do not have RGB-D sensors. Hence, we instead simulate RGB-D from stereo offline using RAFT optical flow estimation [29 ###reference_b29###], and run NICE-SLAM on the result. NICE-SLAM can fail to converge in these semi-outdoor scenarios in a real-time configuration, so we increased the number of samples and iterations to improve performance. We ran NICE-SLAM for 350 iterations per KeyFrame, used samples per ray, and selected a KeyFrame every 5 frames. This results in offline runtime performance. To bound the computational complexity, we set the middle and fine grid sizes to 0.64m and 0.32m respectively."
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"section_id": "4.4",
|
| 139 |
+
"parent_section_id": "4",
|
| 140 |
+
"section_name": "IV-D Performance Analysis",
|
| 141 |
+
"text": ""
|
| 142 |
+
},
|
| 143 |
+
{
|
| 144 |
+
"section_id": "4.4.1",
|
| 145 |
+
"parent_section_id": "4.4",
|
| 146 |
+
"section_name": "IV-D1 Trajectory Tracking Evaluation",
|
| 147 |
+
"text": "###table_2### Trajectory estimates from each algorithm are evaluated according to the procedure described in [30 ###reference_b30###]. We use an open-source package for these evaluations111https://github.com/MichaelGrupp/evo. Trajectories are aligned, then, the root-mean-squared absolute pose error is computed and referred to as .\nTable II ###reference_### compares trajectory performance to state-of-the-art methods [15 ###reference_b15###, 6 ###reference_b6###]. Our method offers performance competitive with or better than existing state-of-the-art LiDAR SLAM. On the evaluated scenes, we outperform LeGO-LOAM except for on Newer College Quad, which is the largest and most open sequence. Even on Quad, our estimated trajectory is within millimeters of the LeGO-LOAM result. Additionally, unlike LeGO-LOAM, our method creates dense implicit maps of the scene.\nOn the MCR sequence, NICE-SLAM successfully estimated a trajectory four of five times. The resulting trajectories were reasonable, but not competitive with the LiDAR-based methods. On the other larger sequences, NICE-SLAM failed to track the scene. This reflects that NICE-SLAM was developed for small indoor scenarios and does not scale to more challenging scenes.\nLONER with the CLONeR loss achieves trajectory accuracy similar to LONER on some sequences. However, it consistently performs worse, especially in Quad.\nWe found that the URF loss degrades the performance of LONER\nWe also tested LONER with the KL loss proposed by DS-NeRF [23 ###reference_b23###]. However, it crashes when the initial pose estimation is poor because using fixed uncertainty for the goal distribution can cause numerical instability in the SLAM context."
|
| 148 |
+
},
|
| 149 |
+
{
|
| 150 |
+
"section_id": "4.4.2",
|
| 151 |
+
"parent_section_id": "4.4",
|
| 152 |
+
"section_name": "IV-D2 Reconstruction Evaluation",
|
| 153 |
+
"text": "###table_3### To evaluate maps, point clouds are created by first generating a mesh, then sampling a point cloud from the mesh. To bound the size of the generated point clouds, all maps (estimated and groundtruth) are downsampled to a voxel grid size of 5cm, except for the small MCR sequence, which uses a voxel grid size of 1cm. Finally, because groundtruth maps may extend beyond the field-of-view of the sensor used to collect each sequence, we crop each groundtruth map to the geometry observed by the sensor during data collection.\nMap metrics include accuracy (mean distance from each point in the estimated map to each point in the groundtruth map) and completion (mean distance from each point in the groundtruth map to each point in the estimated map) [5 ###reference_b5###, 6 ###reference_b6###].\nAdditionally, precision and recall are computed with a 0.1m threshold.\nTable III ###reference_### shows quantitative evaluation for map reconstruction performance. LONER performs competitively with or better than the baselines in all tests. LONER and SHINE Mapping out-perform the other baselines. Qualitatively, Fig. 5 ###reference_### shows that SHINE and LONER estimate the most accurate maps. SHINE estimates more complete geometry, while LONER recovers finer detail and produces maps with fewer artifacts.\n###figure_4### ###figure_5###"
|
| 154 |
+
},
|
| 155 |
+
{
|
| 156 |
+
"section_id": "4.5",
|
| 157 |
+
"parent_section_id": "4",
|
| 158 |
+
"section_name": "IV-E Runtime",
|
| 159 |
+
"text": "Runtime performance was evaluated on a computer with an AMD Ryzen 5950X CPU and an NVidia A6000 GPU, which is similar to the platform used to benchmark NICE-SLAM [6 ###reference_b6###]. Each tracking step takes an average of 14ms to compute, which is faster than is needed by the 5Hz configuration. The map is updated continuously throughout operation, with 50 iterations allocated per KeyFrame and one KeyFrame added every 3 seconds. When run in parallel with the tracker, the average time to perform these 50 iterations is 2.79 seconds, or approximately 56ms per iteration. Hence, the map is updated at approximately 18Hz, and the system finishes processing a KeyFrame in under the 3 seconds allotted per KeyFrame. This ensures the system can keep up with input sensor data."
|
| 160 |
+
},
|
| 161 |
+
{
|
| 162 |
+
"section_id": "4.6",
|
| 163 |
+
"parent_section_id": "4",
|
| 164 |
+
"section_name": "IV-F Loss Function Performance",
|
| 165 |
+
"text": "We evaluate each component of the loss function in isolation. In addition to the JS dynamic margin loss, we evaluate an LOS loss with three different exponential decay rates: 0.99 (Slow), 0.95 (Medium), and 0.85 (Fast). Finally, we consider the depth loss in isolation, as is used in [6 ###reference_b6###, 5 ###reference_b5###]. As a qualitative comparison of mapping quality, depth images rendered from each configuration are shown in Fig. 6 ###reference_###. The proposed JS loss shows the most complete and detailed reconstruction of the tested configurations.\nAdditionally, Fig. 4 ###reference_### demonstrates that JS loss converges faster than other losses. In this experiment, we evaluate the convergence of each function when training on a single scan in simulated data. We use the CARLA simulator, where we obtain groundtruth depth images for evaluations. We compute the mean squared error of rendered depth images throughout training to show convergence performance. The results show that our JS loss converges faster than other losses."
|
| 166 |
+
},
|
| 167 |
+
{
|
| 168 |
+
"section_id": "4.7",
|
| 169 |
+
"parent_section_id": "4",
|
| 170 |
+
"section_name": "IV-G Ablation Study",
|
| 171 |
+
"text": "This section describes the ablation studies over key components of the SLAM framework and the loss function. To compare maps in the ablations, we evaluate the L1 depth loss by comparing rendered depth to depth measured by the LiDAR. This is analogous to the L1 Depth metric commonly used in NeRF frameworks [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###]. We compute the L1 depth across 25 randomly selected scans and report the mean value.\n###figure_6###"
|
| 172 |
+
},
|
| 173 |
+
{
|
| 174 |
+
"section_id": "4.7.1",
|
| 175 |
+
"parent_section_id": "4.7",
|
| 176 |
+
"section_name": "IV-G1 SLAM Framework",
|
| 177 |
+
"text": "###table_4### In Table IV ###reference_###, we compare the impact of three changes to the SLAM framework. Disabling pose optimization is confirmed to strongly impact localization and mapping. Replacing the random KeyFrame selection with either the most recent or recent KeyFrames and randomly selected KeyFrames generally reduces performance. Finally, on the outdoor dataset, disabling sky segmentation has little effect on localization but degrades reconstruction accuracy."
|
| 178 |
+
},
|
| 179 |
+
{
|
| 180 |
+
"section_id": "4.7.2",
|
| 181 |
+
"parent_section_id": "4.7",
|
| 182 |
+
"section_name": "IV-G2 Loss Function",
|
| 183 |
+
"text": "###table_5### Finally, we consider disabling features of the proposed loss function, which includes both the JS loss and the depth loss. In Table V ###reference_###, we evaluate using only depth loss, depth loss, and LOS loss with the fixed medium decay rate, LOS loss with dynamic margin and no depth loss, and the full system. The results demonstrate that the proposed system performs best in all metrics on all datasets."
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"section_id": "5",
|
| 187 |
+
"parent_section_id": null,
|
| 188 |
+
"section_name": "Conclusions and future work",
|
| 189 |
+
"text": "This paper proposed LONER, the first real-time LiDAR SLAM algorithm with an implicit neural map representation. To achieve SLAM in real-time, we presented a novel loss function for depth-supervised training. Results demonstrated that the JS loss outperforms current loss functions in both reconstruction accuracy and hole-filling while maintaining low computational costs. By testing this method on public datasets, we demonstrated that LONER achieves state-of-the-art map and trajectory quality, while providing an implicit geometry representation to support novel view depth rendering.\nThere are several avenues of future work to continue improving LONER. First, adding RGB data without compromising runtime performance would aid in the realism of reconstructions. Additionally, considering alternate input feature embeddings and ray selection heuristics could improve the ability of LONER to operate in city-scale scenarios. Further, inertial data could help the system track accurately under rapid rotation and in feature-sparse scenarios, where the LiDAR data is less informative. Finally, to function in highly dynamic environments, more work is needed to handle dynamic objects in the scene."
|
| 190 |
+
}
|
| 191 |
+
],
|
| 192 |
+
"appendix": [],
|
| 193 |
+
"tables": {
|
| 194 |
+
"1": {
|
| 195 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Parameters for LONER.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T1.9\">\n<tr class=\"ltx_tr\" id=\"S4.T1.9.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.9.10.1\">Description</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.9.10.2\">Symbol</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.9.10.3\">Value</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.1.2\">Time per KeyFrame</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.1\">\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.3\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.2.2.2\">KeyFrame Window Size</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.2.1\">\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.2.3\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.3.3.2\">Rays per KeyFrame</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.1\">\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.3.3.3\">512</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.4.4.2\">Samples per Ray</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.1\">\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.3\">512</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.5.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.5.5.2\">Min Depth Margin</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.5.1\">\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.5.5.3\">0.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.6.6.2\">JS Scale Hyperparameter</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.1\">\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.6.6.3\">1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.7.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.7.7.2\">Min/Max JS Divergence</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.7.7.1\">\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.7.7.3\">1, 10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.9.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.9.9.3\">Loss Coefficients</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.8.8.1\">\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.9.9.2\">\n, 1</td>\n</tr>\n</table>\n</figure>",
|
| 196 |
+
"capture": "TABLE I: Parameters for LONER."
|
| 197 |
+
},
|
| 198 |
+
"2": {
|
| 199 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Pose tracking results on Fusion Portable and Newer College sequences. Reported metric is RMS APE (m). An \u2717 indicates the algorithm failed.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T2.2\">\n<tr class=\"ltx_tr\" id=\"S4.T2.2.3\">\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.2.3.1\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.3.2\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.3.2.1\">MCR</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.3.3\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.3.3.1\">Canteen</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.3.4\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.3.4.1\">Garden</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.3.5\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.3.5.1\">Quad</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.2.4.1\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.4.1.1\">LeGO-LOAM</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.4.2\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">0.052</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.4.3\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">0.129</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.4.4\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">0.161</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.4.5\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.4.5.1\">0.126</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.2.5.1\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.5.1.1\">NICE-SLAM</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.5.2\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">0.248</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.5.3\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.5.4\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">\u2717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.5.5\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.1\">LONER w./ </span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.2\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">0.047</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.3\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">0.952</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.4\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">0.928</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.5\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">0.931</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.2.2.1\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.1.1\">LONER w./ </span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">0.034</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.3\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">0.071</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.4\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">0.073</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.5\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">0.306</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.2.6.1\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.6.1.1\">LONER</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T2.2.6.2\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.6.2.1\">0.029</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T2.2.6.3\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.6.3.1\">0.064</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T2.2.6.4\" style=\"padding-left:3.5pt;padding-right:3.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.6.4.1\">0.056</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T2.2.6.5\" style=\"padding-left:3.5pt;padding-right:3.5pt;\">0.130</td>\n</tr>\n</table>\n</figure>",
|
| 200 |
+
"capture": "TABLE II: Pose tracking results on Fusion Portable and Newer College sequences. Reported metric is RMS APE (m). An \u2717 indicates the algorithm failed."
|
| 201 |
+
},
|
| 202 |
+
"3": {
|
| 203 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Comparison of map Acccuracy (m), Completion (m), Precision, and Recall between proposed and baseline algorithms. Unlike the others, SHINE used ground truth poses. A \u2018-\u2019 indicates invalid configurations, while \u2018\u2717\u2019 indicates that the algorithm failed.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T3.2\">\n<tr class=\"ltx_tr\" id=\"S4.T3.2.3\">\n<td class=\"ltx_td\" id=\"S4.T3.2.3.1\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T3.2.3.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.3.3.1\">NICE</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.3.4\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.3.4.1\">SHINE</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.3.5.1\" style=\"font-size:70%;\">LONER w./</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.3.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.3.6.1\" style=\"font-size:70%;\">LONER w./</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.3.7\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.3.7.1\">LONER</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2\">\n<td class=\"ltx_td\" id=\"S4.T3.2.2.3\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T3.2.2.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.5.1\">SLAM</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.2.4.1\" rowspan=\"4\">\n<span class=\"ltx_text\" id=\"S4.T3.2.4.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.2.4.1.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T3.2.4.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.4.1.1.1.1.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T3.2.4.1.1.1.1.1.1\" style=\"width:6.8pt;height:23.8pt;vertical-align:-8.5pt;\"><span class=\"ltx_transformed_inner\" style=\"width:23.8pt;transform:translate(-8.46pt,0pt) rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"S4.T3.2.4.1.1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.4.1.1.1.1.1.1.1.1\">MCR</span></span>\n</span></span></span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.2.4.2\">Acc.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.4.3\">0.621</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.4.4\">0.164</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.4.5.1\">0.110</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.4.6\">0.153</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.4.7\">0.186</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.5.1\">Cmp.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.5.2\">0.419</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_r\" id=\"S4.T3.2.5.3\">0.075</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.5.4\">0.080</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.5.5\">0.102</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.5.6.1\">0.069</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.6.1\">Prec.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.6.2\">0.124</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.6.3\">0.624</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.6.4.1\">0.665</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.6.5\">0.449</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.6.6\">0.473</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.7.1\">Rec.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.7.2\">0.476</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.7.3\">0.757</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.7.4.1\">0.940</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.7.5\">0.884</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.7.6\">0.932</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.2.8.1\" rowspan=\"4\">\n<span class=\"ltx_text\" id=\"S4.T3.2.8.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.2.8.1.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T3.2.8.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.8.1.1.1.1.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T3.2.8.1.1.1.1.1.1\" style=\"width:6.8pt;height:35.8pt;vertical-align:-14.5pt;\"><span class=\"ltx_transformed_inner\" style=\"width:35.8pt;transform:translate(-14.5pt,0pt) rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"S4.T3.2.8.1.1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.8.1.1.1.1.1.1.1.1\">Canteen</span></span>\n</span></span></span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.2.8.2\">Acc.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.8.3\" rowspan=\"4\"><span class=\"ltx_text\" id=\"S4.T3.2.8.3.1\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.8.4\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.8.5\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.8.6\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.8.7\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.9.1\">Cmp.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.9.2\">0.116</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.9.3\">0.220</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.9.4\">0.190</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.9.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.9.5.1\">0.105</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.10.1\">Prec.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S4.T3.2.10.2\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.10.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.10.4\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.10.5\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.11.1\">Rec.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.11.2\">0.753</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.11.3\">0.524</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.11.4\">0.846</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.11.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.11.5.1\">0.878</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.2.12.1\" rowspan=\"4\">\n<span class=\"ltx_text\" id=\"S4.T3.2.12.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.2.12.1.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T3.2.12.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.12.1.1.1.1.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T3.2.12.1.1.1.1.1.1\" style=\"width:6.9pt;height:32.3pt;vertical-align:-12.7pt;\"><span class=\"ltx_transformed_inner\" style=\"width:32.3pt;transform:translate(-12.69pt,0pt) rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"S4.T3.2.12.1.1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.12.1.1.1.1.1.1.1.1\">Garden</span></span>\n</span></span></span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.2.12.2\">Acc.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.12.3\" rowspan=\"4\"><span class=\"ltx_text\" id=\"S4.T3.2.12.3.1\">\u2717</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.12.4\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.12.5\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.12.6\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.12.7\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.13.1\">Cmp.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.13.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.13.2.1\">0.130</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.13.3\">0.333</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.13.4\">0.539</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.13.5\">0.157</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.14.1\">Prec.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.14.2\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.14.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.14.4\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.14.5\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.15\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.15.1\">Rec.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.15.2\">0.657</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.15.3\">0.469</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.15.4\">0.623</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.15.5\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S4.T3.2.15.5.1\">0.784</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.2.16.1\" rowspan=\"4\">\n<span class=\"ltx_text\" id=\"S4.T3.2.16.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.2.16.1.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T3.2.16.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.16.1.1.1.1.1\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T3.2.16.1.1.1.1.1.1\" style=\"width:8.9pt;height:23.8pt;vertical-align:-9.4pt;\"><span class=\"ltx_transformed_inner\" style=\"width:23.9pt;transform:translate(-7.5pt,2.92pt) rotate(-90deg) ;\">\n<span class=\"ltx_p\" id=\"S4.T3.2.16.1.1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.16.1.1.1.1.1.1.1.1\">Quad</span></span>\n</span></span></span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.2.16.2\">Acc.</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.16.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.16.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.16.4.1\">0.301</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.16.5\">0.663</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.16.6\">0.552</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.2.16.7\">0.380</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.17\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.17.1\">Cmp.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.17.2\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.17.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.17.3.1\">0.148</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.17.4\">0.543</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.17.5\">0.895</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.17.6\">0.373</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.18.1\">Prec.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.18.2\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.18.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.18.3.1\">0.453</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.18.4\">0.150</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.18.5\">0.127</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.18.6\">0.327</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.19\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.2.19.1\">Rec.</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.19.2\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.19.3\">0.717</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.19.4\">0.602</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.19.5\">0.484</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.2.19.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.19.6.1\">0.809</span></td>\n</tr>\n</table>\n</figure>",
|
| 204 |
+
"capture": "TABLE III: Comparison of map Acccuracy (m), Completion (m), Precision, and Recall between proposed and baseline algorithms. Unlike the others, SHINE used ground truth poses. A \u2018-\u2019 indicates invalid configurations, while \u2018\u2717\u2019 indicates that the algorithm failed."
|
| 205 |
+
},
|
| 206 |
+
"4": {
|
| 207 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE IV: </span>We perform an ablation study over the SLAM framework by disabling key features, and show the proposed system outperforms alternatives.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T4.19\">\n<tr class=\"ltx_tr\" id=\"S4.T4.19.20\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.19.20.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.19.20.1.1\">Pose</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.19.20.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.19.20.2.1\">Sky</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.19.20.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.19.20.3.1\">Random</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" colspan=\"2\" id=\"S4.T4.19.20.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.19.20.4.1\">MCR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" colspan=\"2\" id=\"S4.T4.19.20.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.19.20.5.1\">Canteen</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" colspan=\"2\" id=\"S4.T4.19.20.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.19.20.6.1\">Garden</span></td>\n<td class=\"ltx_td ltx_align_center\" colspan=\"2\" id=\"S4.T4.19.20.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.19.20.7.1\">Quad</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.4.5.1\">Optimization</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.4.6.1\">Segmentation</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.4.4.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.4.7.1\">KeyFrames</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.4.4.8\">L1 Depth</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.4.4.9\">L1 Depth</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.4.4.10\">L1 Depth</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.4.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.4.11\">L1 Depth</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.5.5.1\"><svg class=\"ltx_picture\" height=\"7.7\" id=\"S4.T4.5.5.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.7\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.7) matrix(1 0 0 -1 0 0) translate(3.85,0) translate(0,3.85)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"fill:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.6.6.2\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T4.6.6.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) 
translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.7.7.3\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T4.7.7.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.7.7.4\">0.077</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.7.7.5\">0.317</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.7.7.6\">1.162</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.7.7.7\">2.927</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.7.7.8\">1.223</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.7.7.9\">3.305</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.7.7.10\">0.862</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.7.7.11\">3.302</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.10.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.8.8.1\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T4.8.8.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.9.9.2\"><svg class=\"ltx_picture\" height=\"7.7\" id=\"S4.T4.9.9.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.7\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.7) matrix(1 0 0 -1 0 0) translate(3.85,0) translate(0,3.85)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"fill:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.10.10.3\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T4.10.10.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.10.10.4\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.10.10.5\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.10.10.6\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.10.10.7\">-</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T4.10.10.8\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.10.10.9\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.10.10.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.10.10.10.1\">0.128</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.10.10.11\">0.994</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.13.13\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.11.11.1\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T4.11.11.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.12.12.2\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T4.12.12.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.13.13.3\"><svg class=\"ltx_picture\" height=\"7.7\" id=\"S4.T4.13.13.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.7\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.7) matrix(1 0 0 -1 0 0) translate(3.85,0) translate(0,3.85)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"fill:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.13.13.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.13.13.4.1\">0.028</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.13.13.5\">0.289</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.13.13.6\">0.079</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.13.13.7\">1.573</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.13.13.8\">0.066</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.13.13.9\">1.237</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.13.13.10\">0.219</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.13.13.11\">1.363</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.16.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.14.14.1\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T4.14.14.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.15.15.2\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T4.15.15.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g 
color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.16.16.3\"><svg class=\"ltx_picture\" height=\"7.7\" id=\"S4.T4.16.16.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.7\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.7) matrix(1 0 0 -1 0 0) translate(3.85,0) translate(0,3.85)\"><path d=\"M 0 0 L 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 Z\"></path><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"fill:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.16.16.4\">0.029</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.16.16.5\">0.285</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.16.16.6\">0.065</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.16.16.7\">1.298</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.16.16.8\">0.057</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.16.16.9\">1.261</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.16.16.10\">0.130</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.16.16.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.16.16.11.1\">0.829</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.19.19\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.17.17.1\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T4.17.17.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.18.18.2\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T4.18.18.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.19.19.3\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T4.19.19.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S4.T4.19.19.4\">0.029</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.19.19.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.19.19.5.1\">0.284</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.19.19.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.19.19.6.1\">0.064</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.19.19.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.19.19.7.1\">1.296</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.19.19.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.19.19.8.1\">0.056</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.19.19.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.19.19.9.1\">1.198</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.19.19.10\">0.130</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.19.19.11\">0.880</td>\n</tr>\n</table>\n</figure>",
|
| 208 |
+
"capture": "TABLE IV: We perform an ablation study over the SLAM framework by disabling key features, and show the proposed system outperforms alternatives."
|
| 209 |
+
},
|
| 210 |
+
"5": {
|
| 211 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE V: </span>We perform an ablation study over the loss. The first row uses only depth loss. The second uses depth loss and LOS loss with no dynamic margin. The third row uses LOS Loss with dynamic margin. The final row is the proposed system.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T5.15\">\n<tr class=\"ltx_tr\" id=\"S4.T5.15.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.15.16.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.15.16.1.1\">Depth</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.15.16.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.15.16.2.1\">LOS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.15.16.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.15.16.3.1\">Dynamic</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" colspan=\"2\" id=\"S4.T5.15.16.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.15.16.4.1\">MCR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" colspan=\"2\" id=\"S4.T5.15.16.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.15.16.5.1\">Canteen</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" colspan=\"2\" id=\"S4.T5.15.16.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.15.16.6.1\">Garden</span></td>\n<td class=\"ltx_td ltx_align_center\" colspan=\"2\" id=\"S4.T5.15.16.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.15.16.7.1\">Quad</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.4.5.1\">Loss</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.4.6.1\">Loss</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.4.4.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.4.4.7.1\">Margin</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.4.4.8\">L1 Depth</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.4.4.9\">L1 Depth</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.4.4.10\">L1 Depth</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.4.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.4.4.11\">L1 Depth</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.5.5.1\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T5.5.5.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.6.6.2\"><svg class=\"ltx_picture\" height=\"7.7\" id=\"S4.T5.6.6.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.7\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" 
transform=\"translate(0,7.7) matrix(1 0 0 -1 0 0) translate(3.85,0) translate(0,3.85)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"fill:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.6.6.3\">N/A</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.6.6.4\">0.046</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.6.6.5\">0.355</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.6.6.6\">1.236</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.6.6.7\">3.014</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.6.6.8\">0.788</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.6.6.9\">2.447</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.6.6.10\">0.779</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.6.6.11\">2.265</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.9.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.7.7.1\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T5.7.7.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.8.8.2\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T5.8.8.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.9.9.3\"><svg class=\"ltx_picture\" height=\"7.7\" id=\"S4.T5.9.9.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.7\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.7) matrix(1 0 0 -1 0 0) translate(3.85,0) translate(0,3.85)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"fill:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.9.9.4\">0.033</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.9.9.5\">0.338</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.9.9.6\">0.075</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.9.9.7\">1.453</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.9.9.8\">0.076</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.9.9.9\">1.304</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.9.9.10\">0.570</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.9.9.11\">1.747</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.12.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.10.10.1\"><svg class=\"ltx_picture\" height=\"7.7\" 
id=\"S4.T5.10.10.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.7\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.7) matrix(1 0 0 -1 0 0) translate(3.85,0) translate(0,3.85)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"fill:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.11.11.2\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T5.11.11.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.12.12.3\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T5.12.12.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.12.12.4\">0.030</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.12.12.5\">0.358</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.12.12.6\">0.068</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.12.12.7\">1.907</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.12.12.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.12.12.8.1\">0.056</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T5.12.12.9\">1.490</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.12.12.10\">0.154</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T5.12.12.11\">2.262</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T5.15.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.13.13.1\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T5.13.13.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.14.14.2\"><svg class=\"ltx_picture\" height=\"7.15\" id=\"S4.T5.14.14.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.15.15.3\"><svg 
class=\"ltx_picture\" height=\"7.15\" id=\"S4.T5.15.15.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"7.15\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,7.15) matrix(1 0 0 -1 0 0) translate(3.57,0) translate(0,3.57)\"><path d=\"M 0 0 M 3.57 0 C 3.57 1.97 1.97 3.57 0 3.57 C -1.97 3.57 -3.57 1.97 -3.57 0 C -3.57 -1.97 -1.97 -3.57 0 -3.57 C 1.97 -3.57 3.57 -1.97 3.57 0 Z M 0 0\" style=\"stroke:none\"></path></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.15.15.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.15.15.4.1\">0.029</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.15.15.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.15.15.5.1\">0.284</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.15.15.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.15.15.6.1\">0.064</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.15.15.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.15.15.7.1\">1.296</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.15.15.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.15.15.8.1\">0.056</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T5.15.15.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.15.15.9.1\">1.198</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.15.15.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.15.15.10.1\">0.130</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T5.15.15.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T5.15.15.11.1\">0.880</span></td>\n</tr>\n</table>\n</figure>",
|
| 212 |
+
"capture": "TABLE V: We perform an ablation study over the loss. The first row uses only depth loss. The second uses depth loss and LOS loss with no dynamic margin. The third row uses LOS Loss with dynamic margin. The final row is the proposed system."
|
| 213 |
+
}
|
| 214 |
+
},
|
| 215 |
+
"image_paths": {
|
| 216 |
+
"1": {
|
| 217 |
+
"figure_path": "2309.04937v3_figure_1.png",
|
| 218 |
+
"caption": "Figure 1: LONER reconstruction on a courtyard scene [4]. The top-right is a mesh reconstruction with the estimated trajectory in red. The surrounding images are rendered depth images from novel views outside of the training trajectory, demonstrating LONER\u2019s ability to reconstruct dense novel views of an environment.",
|
| 219 |
+
"url": "http://arxiv.org/html/2309.04937v3/extracted/5453176/figures/fig1_v4.png"
|
| 220 |
+
},
|
| 221 |
+
"2": {
|
| 222 |
+
"figure_path": "2309.04937v3_figure_2.png",
|
| 223 |
+
"caption": "Figure 2: LONER system overview. Incoming scans are decimated then tracked with ICP, after which the sky is segmented from the scene geometry. Selected scans are chosen as KeyFrames. Each map update includes the current KeyFrame and randomly selected past KeyFrames. Our novel loss function is used to update the poses and MLP weights. The resulting implicit map can be rendered offline to a variety of formats including depth images and meshes.",
|
| 224 |
+
"url": "http://arxiv.org/html/2309.04937v3/extracted/5453176/figures/system_overview_v10.png"
|
| 225 |
+
},
|
| 226 |
+
"3": {
|
| 227 |
+
"figure_path": "2309.04937v3_figure_3.png",
|
| 228 |
+
"caption": "Figure 3: Illustration of the difference between the JS loss and the LOS loss.\nThe LOS loss sets a uniform margin \u03f5italic-\u03f5\\epsilonitalic_\u03f5 for rays pointing to both learned and unobserved regions.\nThis strategy corrupts the learned information by forcing learned regions to predict higher variances. In contrast, the proposed JS loss sets the dynamic margin \u03f5italic-\u03f5\\epsilonitalic_\u03f5 for each ray depending on the similarity between goal distribution and predicted sample distribution. The JS loss sets higher margins for rays in unobserved regions to improve convergence, and sets lower margins for rays in learned regions to refine learned geometry.",
|
| 229 |
+
"url": "http://arxiv.org/html/2309.04937v3/x1.png"
|
| 230 |
+
},
|
| 231 |
+
"4": {
|
| 232 |
+
"figure_path": "2309.04937v3_figure_4.png",
|
| 233 |
+
"caption": "Figure 4: Training on a single LiDAR scan from the CARLA simulator indicates that the JS loss function converges faster than alternatives. The left images show simulated camera and LiDAR data. The plot on the right compares MSE (m22{}^{2}start_FLOATSUPERSCRIPT 2 end_FLOATSUPERSCRIPT) between groundtruth depth and estimated depth throughout training.",
|
| 234 |
+
"url": "http://arxiv.org/html/2309.04937v3/extracted/5453176/figures/loss_convergence_fig_v6.png"
|
| 235 |
+
},
|
| 236 |
+
"5": {
|
| 237 |
+
"figure_path": "2309.04937v3_figure_5.png",
|
| 238 |
+
"caption": "Figure 5: Reconstruction of meshes on each sequence with the benchmarked algorithms. LONER and SHINE offer the most complete and detailed results. SHINE has slightly more complete geometry, noticeable in the top-left of the Quad images where LONER omits pillars captured by SHINE. However, LONER captures details better and has fewer artifacts.",
|
| 239 |
+
"url": "http://arxiv.org/html/2309.04937v3/extracted/5453176/figures/reconstruction_v9.png"
|
| 240 |
+
},
|
| 241 |
+
"6": {
|
| 242 |
+
"figure_path": "2309.04937v3_figure_6.png",
|
| 243 |
+
"caption": "Figure 6: The depth images rendered from the MLP trained by LONER with different loss functions. The depth loss provides blurry geometry with limited training samples. The LOS loss with a fast decay rate provides more detailed geometry but worse hole-filling. In contrast, the LOS loss with a slow decay rate estimates the untrained region better but results in blurry geometry. The proposed JS loss combines the advantages of both fast and slow decay rates, which provides good hole-filling results while preserving geometry details.",
|
| 244 |
+
"url": "http://arxiv.org/html/2309.04937v3/extracted/5453176/figures/depth_image_for_loss_comparison_v2.png"
|
| 245 |
+
}
|
| 246 |
+
},
|
| 247 |
+
"validation": true,
|
| 248 |
+
"references": [],
|
| 249 |
+
"url": "http://arxiv.org/html/2309.04937v3"
|
| 250 |
+
}
|
20240323/2309.06380v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2309.08865v3.json
ADDED
|
@@ -0,0 +1,221 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "ARTEMIS: AI-driven Robotic Triage Labeling and Emergency Medical Information System",
|
| 3 |
+
"abstract": "Mass casualty incidents (MCIs) pose a significant challenge to emergency medical services by overwhelming available resources and personnel. Effective victim assessment is key to minimizing casualties during such a crisis. We introduce ARTEMIS, an AI-driven Robotic Triage labeling and Emergency Medical Information System, to aid first responders in MCI events. It leverages speech processing, natural language processing, and deep learning to help with acuity labeling. This is deployed on a quadruped that performs victim localization and preliminary injury severity assessment. First responders access victim information through a Graphical User Interface (GUI) that is updated in real-time. For validation, an algorithmic triage protocol is proposed, using the Unitree Go1 quadruped. The robot identifies humans, interacts with them, gets vitals and information, and assigns an acuity label. Simulations of an MCI in software and a controlled environment outdoors were conducted. The system achieved a triage-level classification accuracy of over 74% on average and 99% for the most critical victims, i.e. level 1 acuity, outperforming state-of-the-art deep learning-based triage classification systems. In this paper, we showcase the potential of human-robot interaction in assisting medical personnel in MCI events.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Mass Casualty Incidents (MCIs) globally cause significant loss of life and infrastructure damage. These incidents, contributing to tens of thousands of casualties worldwide, highlight the urgency for effective management. In 2023, the United States experienced nine MCIs related to climate crisis, leading to over $1 billion in damages and 99 deaths. The frequency of MCIs has been rising over recent decades [1 ###reference_b1###, 2 ###reference_b2###], posing challenges for first responders, who must manage overwhelming caseloads with limited resources under time constraints. Efficient triage systems are essential to prioritize treatment and manage resources effectively in such high-pressure situations and robotics.\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### During an MCI, first responders are tasked with locating all the victims and perform the preliminary assessment. Triage is defined as the process of preliminary assessment of victims in order to determine the urgency of their need for treatment and the nature of treatment required. This is a two-stage process: the primary triage, which is rapidly done on-site, and the secondary triage, which is done once the patients enter the treatment area. Leveraging robots during MCIs can help expedite the localization process since robots have better mobility due to their compact form factor [3 ###reference_b3###, 4 ###reference_b4###]. Additionally, in hazardous conditions, employing robot teams for localization serves to mitigate the inherent risks faced by first responders, offering a safer and faster approach to the critical tasks at hand [5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###].\nWhile robots have been used in emergency departments (EDs) for triage in assisting doctors and first responders to treat and get vitals from patients [8 ###reference_b8###, 9 ###reference_b9###] and in MCIs for search & rescue [10 ###reference_b10###, 11 ###reference_b11###], there exists a lot of potential for developing robots that are tightly coupled with first responders in real-time. Incorporating robots into disaster response missions offers the potential to gather crucial real-time data and relay on-the-ground information.\nHowever, past deployment of robots in disaster settings have faced many issues, such as human operator error, imperfect autonomy, poor interface, lack of robustness, inefficient power systems, legal challenges, incorrect parameter tuning, etc. [12 ###reference_b12###, 13 ###reference_b13###]. This work attempts to address some of these challenges. A quadruped is utilized for better mobility in rugged terrain [14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###]. Recognizing their role as assistive tools rather than replacements for human responders in MCI scenarios, the quadruped\u2019s only function is to conduct localization and preliminary triage classification in scenarios with limited medical personnel, enabling first responders to efficiently prioritize subsequent assessments. We improve on state-of-the-art machine learning triage models to achieve better classification accuracy and provide an intuitive user interface for first responders.\nCollecting vitals from humans autonomously or with the help of robots has been proven to be possible with both contact-based and contact-free sensing methods between the robot and the victim [17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###]. 
However, since the hardware necessary for such monitoring shows promise for implementation, this paper shifts its focus toward the operational strategies of robotic systems in assisting first responders in speeding up the primary triage process. Furthermore, we explore the use of machine learning and natural language processing as cost-effective approaches to complement hardware sensors for a more holistic initial triage assessment, under the assumption that in the later stages of triage, medical personnel will be more efficient in using the right diagnostic devices. We do not collect vitals from users in the proposed method, but we show that using such technologies with ARTEMIS would make this a complete system. We propose ARTEMIS or AI-driven Robotic Triage Labeling and Emergency Medical Information System as a robot-driven primary triage framework.\nTo this end, we make the following contributions:\nRobot-driven Victim Identification: The deployment of a pre-trained human detection Mediapipe BlazePose model [20 ###reference_b20###] on an unmanned robot quadruped to identify humans in the operating environment, autonomously approach them, and send their location and trajectory to first responders.\nAI-based Triage Classification: A machine learning approach to rapidly identify the triage levels of the identified victims, that is trained on a publicly available dataset collected by the Department of Emergency Medicine, Yale School of Medicine, New Haven, Connecticut, United States of America [21 ###reference_b21###].\nGraphical Frontend: A user-friendly and informative GUI for real-time communication between triage robots and first responders is created. This helps visually depict mission-critical information such as the obstacle-free path to the victim, vital signs and acuity labels, obtained from triage classification.\nSynthetic Triage Dataset: Y-MED-SYN+, an augmented dataset that is synthetically prepared from the Yale medical dataset [21 ###reference_b21###], tailored for primary triage classification, is published. The dataset can be requested at https://ideas.cs.purdue.edu/research/artemis/ ###reference_is/###.\nThis entire process is depicted in Figure 1 ###reference_###, which shows a quadruped scanning of the environment and approaching victims to perform initial triage classification."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II RELATED WORKS",
|
| 15 |
+
"text": "This section highlights and analyzes some existing ways of deploying Machine Learning with triage protocols, and assesses the kinds of robotics capabilities that have come close to being used for such applications."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A Machine Learning for Mass Casualty Incident Triage",
|
| 21 |
+
"text": "Triage is the initial screening examination that divides the patients into multiple categories based on the severity of their injuries. This is a two-stage process, which includes primary (rapid, on-site) and secondary (treatment area) triage. However, there has been much debate in the area to identify what incidents constitute an urgency [22 ###reference_b22###, 23 ###reference_b23###, 24 ###reference_b24###]. The ESI 5-level triage system [25 ###reference_b25###] assists in identifying this and is also able to describe precisely the condition of the patient[26 ###reference_b26###]. Machine Learning (ML) based systems have demonstrated significant potential in triaging patients more effectively in Emergency Departments (ED) by analyzing large datasets to predict the urgency of medical attention required. Studies have shown that ML algorithms can outperform traditional triage methods, facilitating rapid decision-making which is crucial in high-pressure scenarios typical of MCIs [21 ###reference_b21###, 27 ###reference_b27###, 28 ###reference_b28###].\nBuilding on these advancements, the application of robotics equipped with ML-based triage classification in MCIs can further enhance the capabilities of first responders. Robots can navigate through unstructured environments to reach patients, assess their condition using ML algorithms, and categorize them based on the severity of their injuries or medical conditions. This automated triage system can significantly reduce the time taken for initial assessment and enable a more organized response, thereby streamlining rescue operations. By providing accurate, real-time data to first responders, these robotic systems can ensure that medical attention is directed where it\u2019s needed most, enhancing the overall efficiency of emergency response efforts [29 ###reference_b29###, 30 ###reference_b30###, 31 ###reference_b31###].\n###figure_6###"
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B Robot Assistance in Mass Casualty Incidents",
|
| 27 |
+
"text": "In recent years, the use of robots for assistance in mass casualty incidents (MCIs) has garnered significant interest, demonstrating potential to revolutionize emergency response strategies. Historically, research and development in rescue robotics have laid a foundation for their application in MCIs. Notably, some works explored the concept of robot-assisted mass-casualty triage, even in marine scenarios, highlighting the potential of robots in assessing and prioritizing victims in large-scale emergency scenarios [32 ###reference_b32###, 33 ###reference_b33###]. These studies provide valuable insights into the capabilities and limitations of robotics in emergency settings.\nBuilding upon these foundational works, this paper addresses how the integration of triage classification and machine learning (ML) can enhance the effectiveness of robots in MCIs. The incorporation of ML algorithms enables robots to rapidly and accurately assess victims\u2019 medical needs, facilitating more efficient triage and prioritization. This, in turn, aids first responders in making informed decisions, ensuring that critical resources are allocated where they are most needed. Robot-assisted emergency response education and training, which can further benefit from ML-driven triage systems, has also been in the rise recently[34 ###reference_b34###]. By combining the strengths of rescue robotics with advanced ML techniques, we can envisage a future where robots not only assist in navigating and assessing MCIs but also play a crucial role in enhancing the overall response strategy, ultimately improving outcomes in these high-stakes situations."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "III Methodology",
|
| 33 |
+
"text": "The architecture of ARTEMIS includes traversing an obstacle-free path to the victim, identification of victims on site with computer vision, followed by location tagging and acuity label assignment. Thereafter, the model relays the data to the first responders through a GUI-based frontend. The following subsections provide detailed descriptions of these steps. The overall pipeline has been illustrated in Fig. 2 ###reference_###."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "III-A Search Problem Formulation",
|
| 39 |
+
"text": "Victim localization is modeled as a search problem. Define an environment with an unknown number of targets (victims), whose locations are unknown. Let R be the robot placed in a bounded environment [35 ###reference_b35###]. The process of victim localization involves traversing a Hamiltonian Path [36 ###reference_b36###]. A graph G of all the N targets is defined. In this problem setting, neither the number of targets (N) or their locations are known. The robot executes a Hamiltonian walk h(G) on graph G, visiting each target once. Given that the graph may have multiple edges indicating alternate paths between the pair of targets, the problem of finding a Hamiltonian path is NP-complete [36 ###reference_b36###]. When the global information of the graph, i.e. locations of all the N targets and all possible paths are known, this becomes the orienteering problem [37 ###reference_b37###], where the minimum cost of reaching every target, as a function of the distance between two targets, is denoted by\nIn equation 1 ###reference_###, is the cost of traversing between the two targets, which could be a function of distance or battery usage of the robot, and is an indicator variable denoting whether an edge exists between i and j. Using dynamic programming, the problem can be solved with a time complexity of ."
|
| 40 |
+
},
|
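The displayed equation (1) is lost in this parse of the paper. A standard orienteering-style objective consistent with the surrounding description, with c_{ij} the traversal cost and x_{ij} the edge indicator, would read:

```latex
\min \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij}\, x_{ij}, \qquad x_{ij} \in \{0, 1\}
```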
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "III-B Navigation and Trajectory Control",
|
| 45 |
+
"text": "When the number of targets is unknown, at the start of the search, the robot R executes a heuristic search [38 ###reference_b38###]. This can be defined with farthest point sampling, where we have a set of explored points (), which is a subset of all the points in the environment (). In case of localization, these points include the victims that the robot has attended to. The algorithm maximizes the distance between the set of explored points and a randomly chosen point from the environment. The algorithm can be represented as\nDue to a lack of information about N, the algorithm\u2019s termination is determined by the area covered based on the tracked trajectory, where robot R maintains a hierarchical list of points explored and stops the search when the entire area is covered [39 ###reference_b39###]."
|
| 46 |
+
},
|
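The farthest-point-sampling rule in equation (2) is also dropped in this parse; the sketch below illustrates the greedy selection it describes. The 2-D point arrays, Euclidean metric, and fixed iteration count are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def farthest_point_next(explored: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Return the candidate that maximizes its distance to the explored set P_e."""
    # Pairwise distances (m candidates x k explored points), then the distance
    # to the nearest explored point for each candidate.
    dists = np.linalg.norm(candidates[:, None, :] - explored[None, :, :], axis=-1)
    nearest = dists.min(axis=1)
    # Greedy farthest-point choice: the candidate farthest from everything explored so far.
    return candidates[np.argmax(nearest)]

# Toy usage: start at the robot's origin and greedily expand coverage.
explored = np.array([[0.0, 0.0]])
candidates = np.random.uniform(-10.0, 10.0, size=(50, 2))
for _ in range(5):
    explored = np.vstack([explored, farthest_point_next(explored, candidates)])
```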
| 47 |
+
{
|
| 48 |
+
"section_id": "3.3",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "III-C Patient detection and localization",
|
| 51 |
+
"text": "As the initial step, the quadruped employs visual perception through a camera (connected to an external computer) mounted on it, to conduct a scan of the environment by searching for humans. The captured frames are transmitted to a pre-trained human pose detection model - Media Pipe BlazePose [20 ###reference_b20###] implemented on the external computer, where real-time analysis takes place. This model detects and annotates the pose and presence of humans within the observed scene. Subsequently, based on the position of the detection, the robot corrects its heading and approaches the subject to then proceed with acquiring their vitals. Algorithm 1 ###reference_### summarizes the process of detection and approach."
|
| 52 |
+
},
|
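A minimal sketch of the detection-and-approach loop summarized in Algorithm 1, using the MediaPipe BlazePose Python API; the camera index, proportional gain, and the send_yaw_command hook into the Go1's ROS1 control stack are placeholder assumptions, not the authors' exact code.

```python
import cv2
import mediapipe as mp

KP_YAW = 1.5  # illustrative proportional gain for heading correction

def send_yaw_command(yaw_rate: float) -> None:
    """Placeholder: on the Go1 this command would be published through the ROS1 control stack."""
    pass

pose = mp.solutions.pose.Pose(min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)  # external RGB camera mounted on the quadruped

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # Normalized horizontal position of the nose landmark; 0.5 means centered in the frame.
        x = results.pose_landmarks.landmark[mp.solutions.pose.PoseLandmark.NOSE].x
        send_yaw_command(KP_YAW * (0.5 - x))  # turn toward the detected person
```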
| 53 |
+
{
|
| 54 |
+
"section_id": "3.4",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "III-D Location tagging the victim",
|
| 57 |
+
"text": "Apart from the acuity label assignment, it is important to share the victim\u2019s location with the emergency responders to make it easier to reach the victims. It is also important to share the path the robot takes to reach the victim because this path can be assumed to be free from obstacles such as rubble. This is done with the help of an onboard inertial measurement unit (IMU) that helps update the robot\u2019s trajectory as it moves."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.5",
|
| 61 |
+
"parent_section_id": "3",
|
| 62 |
+
"section_name": "III-E Triage Classification",
|
| 63 |
+
"text": "Following the detection of victims, the next step is the preliminary assessment by the quadruped. The quadruped initiates a series of questions asking the victim to state their primary symptoms. Victims undergo the primary triage process, wherein the assigned label is determined partly by their responses (chief complaints such as \u201cknee pain\u201d, \u201cchest pain\u201d, or \u201cnumbness\u201d). The acuity labels range from 1 to 5, with acuity label 1 indicating the most critical condition, such as low vitals or being unresponsive and acuity label 5 indicates that the victim is fairly stable. Additionally, the quadruped may be equipped with non-invasive contact sensors capable of gathering vital signs, including pulse, oxygen saturation, temperature and blood pressure."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.6",
|
| 67 |
+
"parent_section_id": "3",
|
| 68 |
+
"section_name": "III-F Human-Robot-Interaction",
|
| 69 |
+
"text": "To effectively interact with humans, Python\u2019s Speech Recognition and pyttsx3 packages, with which it can communicate verbally, are deployed on the robot. A standard American English female voice is used since the quadruped was tested in the United States. However, it is possible to change the accent to the region in which the robot is deployed to ensure that victims have no difficulty understanding the robot. At the moment, ARTEMIS only supports the English language."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "3.7",
|
| 73 |
+
"parent_section_id": "3",
|
| 74 |
+
"section_name": "III-G Graphical User Interface (GUI)",
|
| 75 |
+
"text": "A key feature for communicating with first responders is a graphical user interface (GUI) that updates victim information in real-time. The victim\u2019s chief complaints, acuity label, trajectory, and location are transmitted from the robots on-site to a central server, which can be accessed from a web-based front end."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4",
|
| 79 |
+
"parent_section_id": null,
|
| 80 |
+
"section_name": "IV Experiments and Results",
|
| 81 |
+
"text": "In this section, the datasets used to train the triage classifier are described (Sec. IV-A ###reference_###), followed by the data pre-processing steps taken (Sec. IV-B ###reference_###). The ML models evaluated are described and compared against state-of-the-art models in Section IV-C ###reference_###. The specifications of the robot prototype used for field trials (Sec. IV-D ###reference_###) are explained (Sec. IV-A ###reference_###), followed by the performance evaluation (Sec. IV-E ###reference_###)."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.1",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "IV-A Dataset",
|
| 87 |
+
"text": "To train the machine learning models, a de-identified dataset collected by the Yale School of Medicine [21 ###reference_b21###] is used. This is referred to as the \u201cY-MED\u201d (Yale Medical Dataset). This is a publicly accessible dataset containing patient medical data, which was collected from the emergency department in the Yale New Haven health system between March 2014 and July 2017. The Y-MED contains patient age, vital measurements (temperature, heart rate, saturation, age), chief complaints (knee pain, chest pain etc), and medical history in addition to the acuity level, which follows the ESI scale. In order to predict the acuity level of patients in a mass casualty incident, vitals, age, and chief complaints are the only attributes of relevance. The MIMIC-IV-ED triage dataset from BIDMC [40 ###reference_b40###] was also used to train models. This dataset (referred to here as \u201cMIMIC\u201d) only contains data for the six vital signs."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.2",
|
| 91 |
+
"parent_section_id": "4",
|
| 92 |
+
"section_name": "IV-B Data Pre-processing",
|
| 93 |
+
"text": "Out of the original 972 attributes in Y-MED, only 207 of them are deemed relevant. After removing outliers and empty records based on [25 ###reference_b25###], the dataset is left with 268,469 of the original 560,486 records. Of the original 425,087 records in MIMIC, 224,736 records remained after applying the same cleaning process.\nTo help the models learn better and to improve the generalizability of the model, the records are re-scaled using z-score normalization, using the mean and standard deviation of the cleaned dataset.\nAdditionally, both the cleaned MIMIC and Y-MED have a significant class imbalance. In MIMIC, only 0.21% of the dataset represent patients with acuity level 5 and 53% represent patients with acuity level 3. In the Y-MED, only 0.1% represent patients with acuity level 1 and 44% represent patients with acuity level 3. To equalize the distribution, data is generated synthetically using Synthetic Minority Over-sampling Technique (SMOTE) [41 ###reference_b41###]. Each of the undersampled classes is augmented with synthetic samples generated along the line segments joining the k-nearest neighbors of samples within the class. After oversampling, Y-MED has 589,260 records and MIMIC has 593,170. The augmented and class-balanced Y-MED dataset is published as the Y-MED-SYN+ dataset."
|
| 94 |
+
},
|
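A sketch of the described pre-processing, assuming a hypothetical ymed_cleaned.csv with an "acuity" label column; the z-score step and the imbalanced-learn SMOTE call mirror the text, but the file name, column names, and parameters are illustrative.

```python
import pandas as pd
from imblearn.over_sampling import SMOTE

# Hypothetical file/column names: 207 cleaned attributes plus an "acuity" label.
df = pd.read_csv("ymed_cleaned.csv")
X = df.drop(columns=["acuity"])
y = df["acuity"]

# Z-score normalization with the cleaned dataset's mean and standard deviation.
X = (X - X.mean()) / X.std()

# SMOTE synthesizes minority-class samples along segments joining k nearest neighbors.
X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print(pd.Series(y_res).value_counts())  # acuity classes are now balanced
```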
| 95 |
+
{
|
| 96 |
+
"section_id": "4.3",
|
| 97 |
+
"parent_section_id": "4",
|
| 98 |
+
"section_name": "IV-C Models",
|
| 99 |
+
"text": "In the following subsections, the machine learning architectures that were used for triage classification are discussed. The input for each model is a 207 attribute vector acquired from the victim by the robot."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "4.3.1",
|
| 103 |
+
"parent_section_id": "4.3",
|
| 104 |
+
"section_name": "IV-C1 Multi-Layer Perceptron (Proposed)",
|
| 105 |
+
"text": "The multi-layer perceptron (MLP) is built using TensorFlow. The network has four hidden layers, each with 50 units and a bias unit. Each of these layers uses the Rectified Linear Unit (ReLU) activation function. The output layer has five units, as well as the Softmax activation. We use Adamax Optimization with a learning rate of 0.01 and a weight decay of 1e-6. Categorical cross entropy is used as the loss function. The MLP is trained for 5000 steps. The weights of the model at the highest validation accuracy are saved and used for prediction."
|
| 106 |
+
},
|
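A sketch of the described MLP in Keras. The 207-wide input, checkpoint path, and commented-out fit call are illustrative, and the weight_decay argument assumes a recent Keras optimizer API; this is not the authors' exact training script.

```python
import tensorflow as tf

model = tf.keras.Sequential(
    [tf.keras.Input(shape=(207,))]
    + [tf.keras.layers.Dense(50, activation="relu") for _ in range(4)]  # 4 hidden layers
    + [tf.keras.layers.Dense(5, activation="softmax")]                  # 5 acuity levels
)

model.compile(
    optimizer=tf.keras.optimizers.Adamax(learning_rate=0.01, weight_decay=1e-6),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Keep the weights from the point with the best validation accuracy.
ckpt = tf.keras.callbacks.ModelCheckpoint(
    "mlp_best.keras", monitor="val_accuracy", save_best_only=True
)
# model.fit(X_train, y_train_onehot, validation_data=(X_val, y_val_onehot),
#           epochs=50, callbacks=[ckpt])
```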
| 107 |
+
{
|
| 108 |
+
"section_id": "4.3.2",
|
| 109 |
+
"parent_section_id": "4.3",
|
| 110 |
+
"section_name": "IV-C2 Random Forest",
|
| 111 |
+
"text": "Python\u2019s scikit-learn implementation of random forest with 100 estimators is used. Bootstrapping is also employed within the forest to improve the classification accuracy. The Gini Index is used as the criteria for feature splitting."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"section_id": "4.3.3",
|
| 115 |
+
"parent_section_id": "4.3",
|
| 116 |
+
"section_name": "IV-C3 Gaussian Naive Bayes",
|
| 117 |
+
"text": "The Gaussian Naive Bayes approach is considered due to the presence of many continuous variables such as saturation, temperature etc. This algorithm assumes the attributes follow the Gaussian distribution and computes the probability as\nWhere, and are the mean and variance of a class c. The model is implemented with Python\u2019s scikit-learn package."
|
| 118 |
+
},
|
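The class-conditional likelihood referenced above is dropped in this parse; the standard Gaussian Naive Bayes form it describes, with \mu_c and \sigma_c^2 the per-class mean and variance, is:

```latex
P(x_i \mid c) \;=\; \frac{1}{\sqrt{2\pi\sigma_c^{2}}}
\exp\!\left(-\frac{(x_i - \mu_c)^{2}}{2\sigma_c^{2}}\right)
```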
| 119 |
+
{
|
| 120 |
+
"section_id": "4.3.4",
|
| 121 |
+
"parent_section_id": "4.3",
|
| 122 |
+
"section_name": "IV-C4 Support Vector Machine",
|
| 123 |
+
"text": "For multi-class classification, a one vs. one support vector machine model with the radial basis function kernel scaled by is built, where M is the number of features and is the variance of the feature. A regularization of 1 is used. This is created with Python\u2019s scikit-learn package."
|
| 124 |
+
},
|
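A sketch of the described configuration with scikit-learn, where gamma="scale" corresponds to the 1/(M·Var(X)) kernel scaling mentioned in the text; the commented fit/predict calls and variable names are illustrative.

```python
from sklearn.svm import SVC

# RBF kernel with gamma="scale" = 1 / (n_features * X.var()), regularization C=1,
# and an explicit one-vs-one decision scheme for the 5-class acuity problem.
svm = SVC(kernel="rbf", gamma="scale", C=1.0, decision_function_shape="ovo")
# svm.fit(X_train, y_train)
# predictions = svm.predict(X_test)
```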
| 125 |
+
{
|
| 126 |
+
"section_id": "4.4",
|
| 127 |
+
"parent_section_id": "4",
|
| 128 |
+
"section_name": "IV-D Go1 Quadruped Robot",
|
| 129 |
+
"text": "###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### Fig. 2 ###reference_### shows the overall system architecture. The quadruped is a Unitree Go1 with an external Logitech C920 RGB camera mounted onto it. The robot control is set up in Robot Operating System 1 (ROS1) and communicates with the robot\u2019s onboard Raspberry Pi 4. A Mediapipe Blazepose pose detection algorithm helps the robot detect human pose in the environment. As the robot moves and searches for a human pose, it uses a proportional gain based yaw correction algorithm to adjust its position and approach humans in the environment as detailed in Algorithm 1 ###reference_###. The robot is also equipped with ultrasonic range sensors that make sure the robot avoids obstacles on the way and is able to approach humans and stop within a threshold distance. The robot\u2019s IMU keeps track of the trajectory of the robot, and these are shared with first responders via a Graphical User Interface (GUI).\nA controlled, simulated demonstration of the victim detection, localization, and triage classification is conducted in an outdoor environment with the quadruped, as shown in Figure 3 ###reference_###.\n###figure_12###"
|
| 130 |
+
},
|
| 131 |
+
{
|
| 132 |
+
"section_id": "4.5",
|
| 133 |
+
"parent_section_id": "4",
|
| 134 |
+
"section_name": "IV-E Evaluation",
|
| 135 |
+
"text": "Precision, recall, F1 score, and accuracy are used to evaluate the models, as shown in table I ###reference_###. The MLP model, trained on Y-MED dataset, has the highest precision as well as accuracy in classifying acuity level 1, which represents the most critical patients. However, due to class imbalance, it performs poorly on MIMIC dataset in classifying all classes except acuity level 5. Gaussian Naive Bayes has 100% accuracy in classifying acuity level 1 patients (see figure 3(b) ###reference_sf2###) but has a very low precision for the same class. Figure 3(d) ###reference_sf4### shows the mis-classification tendencies of the MLP model. In Figure 3(d) ###reference_sf4###, it can be observed that the majority of the mis-classified victims were classified into a class that was adjacent to the true class.\nRandom forest has the best precision in identifying patients with acuity levels 2, 3, 4, and 5, as seen from figure 3(c) ###reference_sf3###. However, the random forest has a lower recall than the MLP, indicating that there are a lot of false negatives. As shown in Figure 3(a) ###reference_sf1###, muli-class SVM has the overall lowest performance among all models.\nARTEMIS hence deploys a 5-layer MLP with the augmented Y-MED dataset, which also has better results than the baseline models compared against. Table II ###reference_### shows the performance of ARTEMIS against other state-of-the-art machine learning-based triage classification models."
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"section_id": "4.6",
|
| 139 |
+
"parent_section_id": "4",
|
| 140 |
+
"section_name": "IV-F Discussion",
|
| 141 |
+
"text": "ARTEMIS shows how robots can be used to assist first responders in the process of attending to victims in MCIs. The use of sensors for vital signs acquisition can help improve the accuracy of triage classification; however, it is to be noted that these sensors will have additional battery overhead [46 ###reference_b46###]. With the pre-processing done on the dataset used, class 1, which is the most severe acuity class, may not be representative of the ideal case as the SMOTE up-sampling increases the number of data points in class 1 significantly. Additionally, deploying the quadruped in real MCIs is not possible due to research constraints and therefore, as shown in Figure 5 ###reference_###, localization within a simulated MCI environment in Gazebo was performed. The environment is set up using CHAMP framework in ROS1 [42 ###reference_b42###]."
|
| 142 |
+
},
|
| 143 |
+
{
|
| 144 |
+
"section_id": "5",
|
| 145 |
+
"parent_section_id": null,
|
| 146 |
+
"section_name": "Conclusion",
|
| 147 |
+
"text": "ARTEMIS, a framework for MCIs to automate primary triage classification with mobile robots is presented. A quadruped equipped with pose detection models that can perform preliminary triage classification is shown, outperforming state-of-the-art machine learning based triage classification models. ARTEMIS outperforms state-of-the-art acuity labeling models and shows promise for integrating robots to assist in the work of first responders in an MCI.\nWith future work, while current research employs a single quadruped disaster response setting, it necessitates the adoption of a heterogeneous system comprising both Unmanned Ground Vehicles (UGVs) and Unmanned Aerial Vehicles (UAVs) for improved efficiency. The primary challenge in such a system lies in devising effective power management strategies to optimize performance. Incorporating models such as DREAM [47 ###reference_b47###], which discusses the development of efficient power management strategies in heterogeneous setups, would be a preliminary step. A potential avenue for future exploration involves integrating heterogeneous coordination strategies, particularly leveraging machine learning for target localization in previously unseen environments, as proposed in [48 ###reference_b48###, 49 ###reference_b49###]. The incorporation of such methodologies allows ARTEMIS to be transformed into a decentralized and more adaptive robotic framework."
|
| 148 |
+
}
|
| 149 |
+
],
|
| 150 |
+
"appendix": [],
|
| 151 |
+
"tables": {
|
| 152 |
+
"1": {
|
| 153 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T1.2\" style=\"width:208.1pt;height:227.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-109.7pt,120.1pt) scale(0.486834721516786,0.486834721516786) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.2.1\">\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.1.1.1\">Dataset</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.1.2.1\">Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.1.3.1\">Overall Accuracy</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.1.4.1\">Class</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.1.5.1\">Precision</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.1.6.1\">Recall</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.1.7.1\">F1 Score</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.1.8.1\">Accuracy</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.2.1\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.2.1.1\">MIMIC</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.2.2\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.2.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.2.1.2.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.2.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.2.2.1.1.1.1\">5-Layer</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.2.2.1.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.2.2.1.1.2.1\">MLP (Proposed Model)</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.2.3\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.2.3.1\">59</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.2.4\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.2.4.1\" style=\"background-color:#EFEFEF;\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.2.5\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.2.5.1\" style=\"background-color:#EFEFEF;\">0.80</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.2.6\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.2.6.1\" style=\"background-color:#EFEFEF;\">0.64</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.2.7\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" 
id=\"S4.T1.2.1.2.7.1\" style=\"background-color:#EFEFEF;\">0.71</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.2.8\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.2.8.1\" style=\"background-color:#EFEFEF;\">0.64</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.3.1\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.3.2\">0.43</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.3.3\">0.35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.3.4\">0.39</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.3.5\">0.35</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.4.1\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.4.1.1\" style=\"background-color:#EFEFEF;\">3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.4.2\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.4.2.1\" style=\"background-color:#EFEFEF;\">0.34</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.4.3\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.4.3.1\" style=\"background-color:#EFEFEF;\">0.41</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.4.4\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.4.4.1\" style=\"background-color:#EFEFEF;\">0.37</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.4.5\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.4.5.1\" style=\"background-color:#EFEFEF;\">0.41</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.5.1\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.5.2\">0.52</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.5.3\">0.57</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.5.4\">0.54</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.5.5\">0.57</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.6.1\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.6.1.1\" style=\"background-color:#EFEFEF;\">5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.6.2\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.6.2.1\" style=\"background-color:#EFEFEF;\">0.90</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.6.3\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.6.3.1\" style=\"background-color:#EFEFEF;\">0.98</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.6.4\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.6.4.1\" style=\"background-color:#EFEFEF;\">0.94</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.6.5\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.6.5.1\" style=\"background-color:#EFEFEF;\">0.98</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_l 
ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.7.1\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.7.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.2.1.7.1.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.7.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.7.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.7.1.1.1.1.1.1\">Yale</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.7.1.1.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.7.1.1.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.7.1.1.1.2.1.1\">Dataset (Y-MED)</span></span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.7.2\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.7.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.2.1.7.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.7.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.7.2.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.7.2.1.1.1.1.1\">5-Layer</span></span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.7.2.1.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.7.2.1.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.7.2.1.1.2.1.1\">MLP (Proposed Model)</span></span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.7.3\" rowspan=\"5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.7.3.1\">74</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.7.4\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.7.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.7.5.1\">0.98</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.7.6\">0.99</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.7.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.7.7.1\">0.99</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.7.8\">0.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.8.1\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.8.1.1\" style=\"background-color:#EFEFEF;\">2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.8.2\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.8.2.1\" style=\"background-color:#EFEFEF;\">0.76</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.8.3\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.8.3.1\" style=\"background-color:#EFEFEF;\">0.71</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.8.4\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.8.4.1\" style=\"background-color:#EFEFEF;\">0.74</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.8.5\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.8.5.1\" style=\"background-color:#EFEFEF;\">0.71</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.9.1\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.9.2\">0.62</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S4.T1.2.1.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.9.3.1\">0.55</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.9.4\">0.58</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.9.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.9.5.1\">0.55</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.10.1\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.10.1.1\" style=\"background-color:#EFEFEF;\">4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.10.2\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.10.2.1\" style=\"background-color:#EFEFEF;\">0.61</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.10.3\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.10.3.1\" style=\"background-color:#EFEFEF;\">0.69</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.10.4\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.10.4.1\" style=\"background-color:#EFEFEF;\">0.65</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.10.5\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.10.5.1\" style=\"background-color:#EFEFEF;\">0.69</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.11.1\">5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.11.2\">0.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.11.3\">0.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.11.4\">0.74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.11.5\">0.75</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.12.1\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.12.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.2.1.12.1.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.12.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.12.1.1.1.1.1\">Yale</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.12.1.1.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.12.1.1.1.2.1\">Dataset (Y-MED)</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.12.2\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.12.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.2.1.12.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.12.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.12.2.1.1.1.1\">Random</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.12.2.1.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.12.2.1.1.2.1\">Forest</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.12.3\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.12.3.1\">73</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.12.4\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.12.4.1\" style=\"background-color:#EFEFEF;\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.12.5\" 
style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.12.5.1\" style=\"background-color:#EFEFEF;\">0.53</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.12.6\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.12.6.1\" style=\"background-color:#EFEFEF;\">0.99</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.12.7\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.12.7.1\" style=\"background-color:#EFEFEF;\">0.69</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.12.8\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.12.8.1\" style=\"background-color:#EFEFEF;\">0.99</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.13.1\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.13.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.13.2.1\">0.87</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.13.3\">0.68</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.13.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.13.4.1\">0.76</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.13.5\">0.68</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.14.1\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.14.1.1\" style=\"background-color:#EFEFEF;\">3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.14.2\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.14.2.1\" style=\"background-color:#EFEFEF;\">0.78</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.14.3\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.14.3.1\" style=\"background-color:#EFEFEF;\">0.50</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.14.4\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.14.4.1\" style=\"background-color:#EFEFEF;\">0.61</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.14.5\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.14.5.1\" style=\"background-color:#EFEFEF;\">0.50</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.15.1\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.15.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.15.2.1\">0.81</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.15.3\">0.58</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.15.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.15.4.1\">0.68</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.15.5\">0.58</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.16\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.16.1\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.16.1.1\" style=\"background-color:#EFEFEF;\">5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.16.2\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text 
ltx_font_bold\" id=\"S4.T1.2.1.16.2.1\" style=\"background-color:#EFEFEF;\">0.91</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.16.3\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.16.3.1\" style=\"background-color:#EFEFEF;\">0.88</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.16.4\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.16.4.1\" style=\"background-color:#EFEFEF;\">0.90</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.16.5\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.16.5.1\" style=\"background-color:#EFEFEF;\">0.88</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.17\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.17.1\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.17.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.2.1.17.1.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.17.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.17.1.1.1.1.1\">Yale</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.17.1.1.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.17.1.1.1.2.1\">Dataset (Y-MED)</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.17.2\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.17.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.2.1.17.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.17.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.17.2.1.1.1.1\">Gaussian</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.17.2.1.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.17.2.1.1.2.1\">Naive Bayes</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.17.3\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.17.3.1\">40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.17.4\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.17.5\">0.35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.17.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.17.6.1\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.17.7\">0.51</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.17.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.2.1.17.8.1\">1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.18\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.18.1\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.18.1.1\" style=\"background-color:#EFEFEF;\">2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.18.2\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.18.2.1\" style=\"background-color:#EFEFEF;\">0.56</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.18.3\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.18.3.1\" style=\"background-color:#EFEFEF;\">0.12</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.18.4\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.18.4.1\" 
style=\"background-color:#EFEFEF;\">0.20</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.18.5\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.18.5.1\" style=\"background-color:#EFEFEF;\">0.12</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.19\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.19.1\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.19.2\">0.58</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.19.3\">0.07</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.19.4\">0.12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.19.5\">0.07</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.20\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.20.1\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.20.1.1\" style=\"background-color:#EFEFEF;\">4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.20.2\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.20.2.1\" style=\"background-color:#EFEFEF;\">0.50</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.20.3\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.20.3.1\" style=\"background-color:#EFEFEF;\">0.12</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.20.4\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.20.4.1\" style=\"background-color:#EFEFEF;\">0.19</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.20.5\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.20.5.1\" style=\"background-color:#EFEFEF;\">0.12</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.21\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.21.1\">5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.21.2\">0.46</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.21.3\">0.70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.21.4\">0.56</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.21.5\">0.70</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.22\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.22.1\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.22.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.2.1.22.1.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.22.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.22.1.1.1.1.1\">Yale</span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.22.1.1.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.22.1.1.1.2.1\">Dataset (Y-MED)</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.22.2\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.22.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.2.1.22.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T1.2.1.22.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.1.22.2.1.1.1.1\">SVM</span></span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.22.3\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.22.3.1\">38</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r 
ltx_border_t\" id=\"S4.T1.2.1.22.4\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.22.4.1\" style=\"background-color:#EFEFEF;\">1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.22.5\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.22.5.1\" style=\"background-color:#EFEFEF;\">0.48</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.22.6\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.22.6.1\" style=\"background-color:#EFEFEF;\">0.61</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.22.7\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.22.7.1\" style=\"background-color:#EFEFEF;\">0.54</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.22.8\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.22.8.1\" style=\"background-color:#EFEFEF;\">0.61</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.23\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.23.1\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.23.2\">0.39</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.23.3\">0.33</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.23.4\">0.36</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.23.5\">0.33</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.24\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.24.1\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.24.1.1\" style=\"background-color:#EFEFEF;\">3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.24.2\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.24.2.1\" style=\"background-color:#EFEFEF;\">0.32</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.24.3\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.24.3.1\" style=\"background-color:#EFEFEF;\">0.09</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.24.4\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.24.4.1\" style=\"background-color:#EFEFEF;\">0.14</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.24.5\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.24.5.1\" style=\"background-color:#EFEFEF;\">0.09</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.25\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.25.1\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.25.2\">0.29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.25.3\">0.23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.25.4\">0.26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.2.1.25.5\">0.23</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.26\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.2.1.26.1\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.26.1.1\" style=\"background-color:#EFEFEF;\">5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.2.1.26.2\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" 
id=\"S4.T1.2.1.26.2.1\" style=\"background-color:#EFEFEF;\">0.35</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.2.1.26.3\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.26.3.1\" style=\"background-color:#EFEFEF;\">0.63</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.2.1.26.4\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.26.4.1\" style=\"background-color:#EFEFEF;\">0.45</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.2.1.26.5\" style=\"background-color:#EFEFEF;\"><span class=\"ltx_text\" id=\"S4.T1.2.1.26.5.1\" style=\"background-color:#EFEFEF;\">0.63</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.3.1.1\" style=\"font-size:90%;\">TABLE I</span>: </span><span class=\"ltx_text\" id=\"S4.T1.4.2\" style=\"font-size:90%;\">The table shows the statistical analysis of different Machine Learning techniques applied on two datasets - MIMIC and Y-MED. It is to be noted that both the datasets are standardized and augmented using SMOTE. The MLP, with an overall accuracy of 74%, outperforms other models trained.</span></figcaption>\n</figure>",
|
| 154 |
+
"capture": "TABLE I: The table shows the statistical analysis of different Machine Learning techniques applied on two datasets - MIMIC and Y-MED. It is to be noted that both the datasets are standardized and augmented using SMOTE. The MLP, with an overall accuracy of 74%, outperforms other models trained."
|
| 155 |
+
},
|
| 156 |
+
"2": {
|
| 157 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.2\">\n<tr class=\"ltx_tr\" id=\"S4.T2.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.1.1.1\">Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.1.2.1\">Technique</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.1.3.1\">Precision</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.1.4.1\">Recall</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.1.5.1\">F1 score</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.1.1\">ARTEMIS</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.2\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.2.2.2.1\">\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.2.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.2.1.1.1.1\">5-layer MLP</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.2.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.2.1.2.1.1\">(Proposed)</span></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.3.1\">0.74</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.4.1\">0.74</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.2.2.5.1\">0.74</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T2.2.3.1\">eUPU <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.08865v3#bib.bib21\" title=\"\">21</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.3.2\">Neural network</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.3.3\">0.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.3.4\">0.69</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.3.5\">0.70</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T2.2.4.1\">eUPU <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.08865v3#bib.bib21\" title=\"\">21</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.4.2\">Log. 
Regression</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.4.3\">0.65</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.4.4\">0.41</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.4.5\">0.50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T2.2.5.1\">eUPU <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.08865v3#bib.bib21\" title=\"\">21</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.5.2\">Random Forest</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.5.3\">0.68</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.5.4\">0.64</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.5.5\">0.66</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T2.2.6.1\">KTAS <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.08865v3#bib.bib43\" title=\"\">43</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.6.2\">Log. Regression</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.6.3\">0.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.6.4\">0.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.6.5\">0.71</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T2.2.7.1\">KTAS <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.08865v3#bib.bib43\" title=\"\">43</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.7.2\">Random Forest</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.7.3\">0.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.7.4\">0.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.7.5\">0.73</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T2.2.8.1\">KATE <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.08865v3#bib.bib44\" title=\"\">44</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.8.2\">XGBoost</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.8.3\">0.72</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.8.4\">0.67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.8.5\">0.69</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T2.2.9.1\">ED-Adm.</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.2.9.2\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.2.9.3\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.2.9.4\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.2.9.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T2.2.10.1\">(Triage) <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.08865v3#bib.bib45\" title=\"\">45</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.10.2\">XGBoost</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_r\" id=\"S4.T2.2.10.3\">0.66</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.10.4\">0.69</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.10.5\">0.67</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T2.2.11.1\">ED-Adm.</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.2.11.2\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.2.11.3\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.2.11.4\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.2.11.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T2.2.12.1\">(Triage) <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.08865v3#bib.bib45\" title=\"\">45</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.12.2\">Log. Regression</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.12.3\">0.65</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.12.4\">0.68</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.12.5\">0.66</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S4.T2.2.13.1\">ED-Adm.</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.2.13.2\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.2.13.3\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.2.13.4\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S4.T2.2.13.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r\" id=\"S4.T2.2.14.1\">(Triage) <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.08865v3#bib.bib45\" title=\"\">45</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.2.14.2\">Log. Regression</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.2.14.3\">0.66</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.2.14.4\">0.70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.2.14.5\">0.68</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T2.3.1.1\" style=\"font-size:90%;\">TABLE II</span>: </span><span class=\"ltx_text\" id=\"S4.T2.4.2\" style=\"font-size:90%;\">Results highlighting that ARTEMIS outperforms state-of-the-art models in acuity labeling.</span></figcaption>\n</figure>",
|
| 158 |
+
"capture": "TABLE II: Results highlighting that ARTEMIS outperforms state-of-the-art models in acuity labeling."
|
| 159 |
+
}
|
| 160 |
+
},
|
| 161 |
+
"image_paths": {
|
| 162 |
+
"1(a)": {
|
| 163 |
+
"figure_path": "2309.08865v3_figure_1(a).png",
|
| 164 |
+
"caption": "(a) Far away pose\nFigure 1: An overview of the collection of vitals, triage classification, and the intervention of first responders during Mass Casualty Incidents (MCI). ARTEMIS, uses machine learning to rapidly and accurately identify the acuity levels of the identified victims.",
|
| 165 |
+
"url": "http://arxiv.org/html/2309.08865v3/extracted/5491136/title2.png"
|
| 166 |
+
},
|
| 167 |
+
"1(b)": {
|
| 168 |
+
"figure_path": "2309.08865v3_figure_1(b).png",
|
| 169 |
+
"caption": "(b) Close-up pose\nFigure 1: An overview of the collection of vitals, triage classification, and the intervention of first responders during Mass Casualty Incidents (MCI). ARTEMIS, uses machine learning to rapidly and accurately identify the acuity levels of the identified victims.",
|
| 170 |
+
"url": "http://arxiv.org/html/2309.08865v3/extracted/5491136/title3.png"
|
| 171 |
+
},
|
| 172 |
+
"1(c)": {
|
| 173 |
+
"figure_path": "2309.08865v3_figure_1(c).png",
|
| 174 |
+
"caption": "(c) Far away pose\nFigure 1: An overview of the collection of vitals, triage classification, and the intervention of first responders during Mass Casualty Incidents (MCI). ARTEMIS, uses machine learning to rapidly and accurately identify the acuity levels of the identified victims.",
|
| 175 |
+
"url": "http://arxiv.org/html/2309.08865v3/extracted/5491136/title4.png"
|
| 176 |
+
},
|
| 177 |
+
"1(d)": {
|
| 178 |
+
"figure_path": "2309.08865v3_figure_1(d).png",
|
| 179 |
+
"caption": "(d) Close-up pose\nFigure 1: An overview of the collection of vitals, triage classification, and the intervention of first responders during Mass Casualty Incidents (MCI). ARTEMIS, uses machine learning to rapidly and accurately identify the acuity levels of the identified victims.",
|
| 180 |
+
"url": "http://arxiv.org/html/2309.08865v3/extracted/5491136/title5.png"
|
| 181 |
+
},
|
| 182 |
+
"2": {
|
| 183 |
+
"figure_path": "2309.08865v3_figure_2.png",
|
| 184 |
+
"caption": "Figure 2: ARTEMIS System Architecture: a Unitree Go1 quadruped to be used for collecting patient vitals (Heart Rate, Respiratory Rate, O2subscript\ud835\udc422O_{2}italic_O start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT Sat., Systolic and Diastolic Blood Pressure, etc.), chief complaints, and age. The Multi-Layer Perceptron (MLP) uses this to classify the patient\u2019s acuity level, which is then displayed on the GUI along with the patient\u2019s location and photograph.",
|
| 185 |
+
"url": "http://arxiv.org/html/2309.08865v3/x2.png"
|
| 186 |
+
},
|
| 187 |
+
"3": {
|
| 188 |
+
"figure_path": "2309.08865v3_figure_3.png",
|
| 189 |
+
"caption": "Figure 3: Example of robot demo run with an external view of the robot moving (top) and a camera view from the robot\u2019s perspective (bottom). The captures from what the robot is seeing are sent over to first responders via the GUI.",
|
| 190 |
+
"url": "http://arxiv.org/html/2309.08865v3/x3.png"
|
| 191 |
+
},
|
| 192 |
+
"4(a)": {
|
| 193 |
+
"figure_path": "2309.08865v3_figure_4(a).png",
|
| 194 |
+
"caption": "(a) SVM\nFigure 4: This figure highlights the tendency of the models to incorrectly classify a triage level. Most of the incorrect labels for d) MLP are adjacent to the true label (83.4% of all mis-classified labels).",
|
| 195 |
+
"url": "http://arxiv.org/html/2309.08865v3/extracted/5491136/SVMovo_confusion_yale_smote.png"
|
| 196 |
+
},
|
| 197 |
+
"4(b)": {
|
| 198 |
+
"figure_path": "2309.08865v3_figure_4(b).png",
|
| 199 |
+
"caption": "(b) Gaussian Naive Bayes\nFigure 4: This figure highlights the tendency of the models to incorrectly classify a triage level. Most of the incorrect labels for d) MLP are adjacent to the true label (83.4% of all mis-classified labels).",
|
| 200 |
+
"url": "http://arxiv.org/html/2309.08865v3/extracted/5491136/NaiveBayes_confusion_yale_smote.png"
|
| 201 |
+
},
|
| 202 |
+
"4(c)": {
|
| 203 |
+
"figure_path": "2309.08865v3_figure_4(c).png",
|
| 204 |
+
"caption": "(c) Random Forest\nFigure 4: This figure highlights the tendency of the models to incorrectly classify a triage level. Most of the incorrect labels for d) MLP are adjacent to the true label (83.4% of all mis-classified labels).",
|
| 205 |
+
"url": "http://arxiv.org/html/2309.08865v3/extracted/5491136/RF_confusion_yale_smote.png"
|
| 206 |
+
},
|
| 207 |
+
"4(d)": {
|
| 208 |
+
"figure_path": "2309.08865v3_figure_4(d).png",
|
| 209 |
+
"caption": "(d) MLP\nFigure 4: This figure highlights the tendency of the models to incorrectly classify a triage level. Most of the incorrect labels for d) MLP are adjacent to the true label (83.4% of all mis-classified labels).",
|
| 210 |
+
"url": "http://arxiv.org/html/2309.08865v3/extracted/5491136/NN_confusion_yale_smote.png"
|
| 211 |
+
},
|
| 212 |
+
"5": {
|
| 213 |
+
"figure_path": "2309.08865v3_figure_5.png",
|
| 214 |
+
"caption": "Figure 5: A simulated MCI environment in Gazebo depicting victim detection is shown. This is done using CHAMP quadruped framework in ROS1 [42].",
|
| 215 |
+
"url": "http://arxiv.org/html/2309.08865v3/extracted/5491136/gazebo.png"
|
| 216 |
+
}
|
| 217 |
+
},
|
| 218 |
+
"validation": true,
|
| 219 |
+
"references": [],
|
| 220 |
+
"url": "http://arxiv.org/html/2309.08865v3"
|
| 221 |
+
}
|
20240323/2309.09469v2.json
ADDED
|
@@ -0,0 +1,369 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Spiking-LEAF: A Learnable Auditory front-end for Spiking Neural Networks",
|
| 3 |
+
"abstract": "Brain-inspired spiking neural networks (SNNs) have demonstrated great potential for temporal signal processing. However, their performance in speech processing remains limited due to the lack of an effective auditory front-end. To address this limitation, we introduce Spiking-LEAF, a learnable auditory front-end meticulously designed for SNN-based speech processing. Spiking-LEAF combines a learnable filter bank with a novel two-compartment spiking neuron model called IHC-LIF. The IHC-LIF neurons draw inspiration from the structure of inner hair cells (IHC) and they leverage segregated dendritic and somatic compartments to effectively capture multi-scale temporal dynamics of speech signals. Additionally, the IHC-LIF neurons incorporate the lateral feedback mechanism along with spike regularization loss to enhance spike encoding efficiency. On keyword spotting and speaker identification tasks, the proposed Spiking-LEAF outperforms both SOTA spiking auditory front-ends and conventional real-valued acoustic features in terms of classification accuracy, noise robustness, and encoding efficiency.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Recently, the brain-inspired spiking neural networks (SNNs) have demonstrated superior performance in sequential modeling [1 ###reference_b1###, 2 ###reference_b2###]. However, their performance in speech processing tasks still lags behind that of state-of-the-art (SOTA) non-spiking artificial neural networks (ANNs) [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###]. This is primarily due to the lack of an effective auditory front-end that can synergistically perform acoustic feature extraction and neural encoding with high efficacy and efficiency.\nThe existing SNN-based auditory front-ends first extract acoustic features from raw audio signals, followed by encoding these real-valued acoustic features into spike patterns that can be processed by the SNN. For feature extraction, many works directly adopt the frequently used acoustic features based on the Mel-scaled filter-bank [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###] or the GammaTone filter-bank [12 ###reference_b12###]. Despite the simplicity of this approach, these handcrafted filter-bank are found to be suboptimal in many tasks when compared to learnable filter-bank [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###].\nIn another vein of research, recent works have also looked into the neurophysiological process happening in the peripheral auditory system and developed more complex biophysical models to enhance the effectiveness of feature extraction [17 ###reference_b17###, 18 ###reference_b18###]. However, these methods not only require fine-tuning a large number of hyperparameters but are also computationally expensive for resource-constrained neuromorphic platforms.\nFor neural encoding, several methods have been proposed that follow the neurophysiological processes within the cochlea [17 ###reference_b17###, 18 ###reference_b18###]. For instance, Cramer et al. proposed a biologically inspired cochlear model with the model parameters directly taken from biological studies [17 ###reference_b17###]. Additionally, other methods propose to encode the temporal variations of the speech signals that are critical for speech recognition. The Send on Delta (SOD) [19 ###reference_b19###] and threshold coding methods [12 ###reference_b12###, 20 ###reference_b20###, 21 ###reference_b21###], for instance, encode the positive and negative variations of signal amplitude into spike trains. However, these neural encoding methods lack many essential characteristics as seen in the human\u2019s peripheral auditory system that are known to be important for speech processing, such as feedback adaptation [22 ###reference_b22###].\nTo address these limitations, we introduce a Spiking LEarnable Audio front-end model, called Spiking-LEAF. The Spiking-LEAF leverages a learnable auditory filter-bank to extract discriminative acoustic features. Furthermore, inspired by the structure and dynamics of the inner hair cells (IHCs) within the cochlea, we further proposed a two-compartment neuron model for neural encoding, namely IHC-LIF neuron. Its two neuronal compartments work synergistically to capture the multi-scale temporal dynamics of speech signals. Additionally, the lateral inhibition mechanism along with spike regularization loss is incorporated to enhance the encoding efficiency. 
The main contributions of this paper can be summarized as follows:\nWe propose a learnable auditory front-end for SNNs, enabling the joint optimization of feature extraction and neural encoding processes to achieve optimal performance in the given task.\nWe propose a two-compartment spiking neuron model for neural encoding, called IHC-LIF, which can effectively extract multi-scale temporal information with high efficiency and noise robustness.\nOur proposed Spiking-LEAF shows high classification accuracy, noise robustness, and encoding efficiency on both keyword spotting and speaker identification tasks.\n###figure_1###"
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Methods",
|
| 15 |
+
"text": "As shown in Fig. 1 ###reference_###, similar to other existing auditory front-ends, the proposed Spiking-LEAF model consists of two parts responsible for feature extraction and neural encoding, respectively. For feature extraction, we apply the Gabor 1d-convolution filter bank along with the Per-Channel Energy Normalization (PCEN) to perform frequency analysis. Subsequently, the extracted acoustic feature is processed by the IHC-LIF neurons for neural encoding. Given that both the feature extraction and neural encoding parts are parameterized, they can be optimized jointly with the backend SNN classifier."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Parameterized acoustic feature extraction",
|
| 21 |
+
"text": "In Spiking-LEAF, the feature extraction is performed with a 1d-convolution Gabor filter bank along with the PCEN that is tailored for dynamic range compression [23 ###reference_b23###]. The Gabor 1d-convolution filters have been widely used in speech processing [24 ###reference_b24###, 16 ###reference_b16###], and its formulation can be expressed as per:\nwhere and denote learnable parameters that characterize the center frequency and bandwidth of filter n, respectively. In particular, for input audio with a sampling rate of 16 kHz, there are a total of 40 convolution filters, with a window length of 25ms ranging over ( samples), have been employed in Spiking-LEAF.\nThese 1d-convolution filters are applied directly to the audio waveform to get the time-frequency representation .\nFollowing the neurophysiological process in the peripheral auditory system, the PCEN [16 ###reference_b16###, 23 ###reference_b23###] has been applied subsequently to further compress the dynamic range of the obtained acoustic features:\nIn Eqs. 2 and 3, represents the time-frequency representation for channel at time step . and are coefficients that control the compression rate. The term is the moving average of the time-frequency feature with a smoothing rate of . Meanwhile, and stands for a positive offset introduced specifically to prevent the occurrence of imaginary numbers in PCEN."
|
| 22 |
+
},
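The section text above only names the Gabor 1d-convolution filter bank and PCEN compression; the following NumPy sketch illustrates the general idea. The Gaussian-envelope parameterization and the PCEN constants (s, alpha, delta, r) are illustrative assumptions, not the values learned or used in the paper.

```python
import numpy as np

def gabor_impulse_response(center_hz, bandwidth_hz, window_len=401, sample_rate=16000):
    # Complex Gabor kernel: a Gaussian envelope (width set by the bandwidth) modulating
    # a sinusoid at the center frequency. Both parameters are learnable in Spiking-LEAF.
    t = np.arange(window_len) / sample_rate - (window_len - 1) / (2 * sample_rate)
    envelope = np.exp(-2.0 * (np.pi * bandwidth_hz * t) ** 2)
    carrier = np.exp(2j * np.pi * center_hz * t)
    return envelope * carrier

def simple_pcen(x, s=0.04, alpha=0.98, delta=2.0, r=0.5, eps=1e-6):
    # Per-Channel Energy Normalization over a (channels, frames) energy map:
    # divide by a smoothed version of each channel, then apply root compression.
    m = np.zeros_like(x)
    m[:, 0] = x[:, 0]
    for t in range(1, x.shape[1]):
        m[:, t] = (1 - s) * m[:, t - 1] + s * x[:, t]  # first-order IIR smoother
    return (x / (eps + m) ** alpha + delta) ** r - delta ** r

# Toy usage: one 1 kHz filter applied to a random 1-second waveform at 16 kHz.
wave = np.random.randn(16000)
kernel = gabor_impulse_response(1000.0, 100.0)
band_energy = np.abs(np.convolve(wave, kernel, mode="same")) ** 2
print(simple_pcen(band_energy[None, :]).shape)
```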
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "Two-compartment spiking neuron model",
|
| 27 |
+
"text": "###figure_2### The Leaky Integrate-and-Fire (LIF) neuron model [25 ###reference_b25###], with a single neuronal compartment, has been widely used in brain simulation and neuromorphic computing [3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 7 ###reference_b7###, 8 ###reference_b8###]. The internal operations of a LIF neuron, as illustrated in Fig. 2 ###reference_### (a), can be expressed by the following discrete-time formulation:\nwhere represents the input spike at time step . and denote the transduced synaptic current and membrane potential, respectively. is the membrane decaying constant that governs the information decaying rate within the LIF neuron. As the Heaviside step function indicated in Eq. 6 ###reference_###, once the membrane potential exceeds the firing threshold , an output spike will be emitted.\nDespite its ubiquity and simplicity, the LIF model possesses inherent limitations when it comes to long-term information storage. These limitations arise from two main factors: the exponential leakage of its membrane potential and the resetting mechanism. These factors significantly affect the model\u2019s efficacy in sequential modeling. Motivated by the intricate structure of biological neurons, recent work has developed a two-compartment spiking neuron model, called TC-LIF, to address the limitations of the LIF neuron [26 ###reference_b26###]. The neuronal dynamics of TC-LIF neurons are given as follows:\nwhere and represent the membrane potential of the dendritic and somatic compartments. The and are two learnable parameters that govern the interaction between dendritic and somatic compartments. Facilitated by the synergistic interaction between these two neuronal compartments, TC-LIF can retain both short-term and long-term information which is crucial for effective speech processing [26 ###reference_b26###]."
|
| 28 |
+
},
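As a rough illustration of the discrete-time LIF dynamics summarized above (leaky integration, Heaviside thresholding, reset), here is a minimal single-compartment sketch. The decay constant, threshold, and hard-reset convention are assumptions; the paper's separate synaptic-current state and the two-compartment TC-LIF extension are omitted.

```python
import numpy as np

def lif_forward(inputs, beta=0.9, threshold=1.0):
    # Discrete-time single-compartment LIF neuron: leaky integration of the input,
    # a Heaviside threshold for spike generation, and a reset after each spike.
    v = 0.0
    spikes = np.zeros(len(inputs))
    for t, x in enumerate(inputs):
        v = beta * v + x      # membrane potential decays by beta, then absorbs the input
        if v >= threshold:
            spikes[t] = 1.0
            v = 0.0           # hard reset (one common convention; soft reset also exists)
    return spikes

out = lif_forward(np.random.rand(200) * 0.3)
print(int(out.sum()), "spikes over 200 steps")
```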
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "IHC-LIF neurons with lateral feedback",
|
| 33 |
+
"text": "Neuroscience studies reveal that lateral feedback connections are pervasive in the peripheral auditory system, and they play an essential role in adjusting frequency sensitivity of auditory neurons [27 ###reference_b27###]. Inspired by this finding, as depicted in Figure 2 ###reference_### (b), we further incorporate lateral feedback components into the dendritic compartment and somatic compartment of the TC-LIF neuron, represented by and respectively. Specifically, each output spike will modulate the neighboring frequency bands with learnable weight matrices and , whose diagonal entries are all zeros.\nThe lateral inhibition feedback of hair cells within the cochlea is found to detect sounds below the thermal noise level and in the presence of noise or masking sounds [28 ###reference_b28###, 29 ###reference_b29###]. Motivated by this finding, we further constrain the weight matrix to enforce lateral inhibitory feedback at the somatic compartment, which is responsible for spike generation. This will suppress the activity of neighboring neurons after the spike generation, amplifying the signal of the most activated neuron while suppressing other neurons. This results in a sparse yet informative spike representation of input signals. The neuronal dynamics of the resulting IHC-LIF model can be described as follows:\nTo further enhance the encoding efficiency, we incorporate a spike rate regularization term into the loss function . It has been applied alongside the classification loss where . Here, represents the average spike rate per neuron per timestep and denotes the expected spike rate. Any spike rate higher than will incur a penalty, and is the penalty coefficient."
|
| 34 |
+
},
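The spike-rate regularization described above can be sketched as a hinge-style penalty on the average firing rate that is added to the task loss. The target rate and penalty coefficient below are placeholder values; the exact functional form used in the paper is not reproduced here.

```python
import numpy as np

def spike_rate_penalty(spike_raster, target_rate=0.1, coeff=1.0):
    # spike_raster: (neurons, timesteps) array of 0/1 encoder outputs.
    # Only the excess of the average firing rate over the target rate is penalized,
    # so rates at or below the target incur no loss.
    mean_rate = spike_raster.mean()
    return coeff * max(0.0, mean_rate - target_rate)

# Toy usage: a dense spike raster is penalized, a sparse one is not.
dense = (np.random.rand(40, 100) < 0.5).astype(float)
sparse = (np.random.rand(40, 100) < 0.05).astype(float)
print(spike_rate_penalty(dense), spike_rate_penalty(sparse))
```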
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "Experimental Results",
|
| 39 |
+
"text": "In this section, we evaluate our model on keyword spotting (KWS) and speaker identification task. For keyword spotting, we use Google Speech Command Dataset V2 [31 ###reference_b31###], which contains 105,829 one-second utterances of 35 commands. For speaker identification (SI), we use the Voxceleb1 dataset [32 ###reference_b32###] with 153,516 utterances from 1,251 speakers, resulting in a classification task with 1,251 classes. We focus our evaluations on the auditory front-end by keeping model architecture and hyperparameters of the backend SNN classifier fixed."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.1",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "Superior feature representation",
|
| 45 |
+
"text": "Table 1 ###reference_### compares our proposed Spiking-LEAF model with other existing auditory front-ends on both KWS and SI tasks. Our results reveal that the Spiking-LEAF consistently outperforms the SOTA spike encoding methods as well as the fbank features [3 ###reference_b3###], demonstrating a superior feature representation power. In the following section, we validate the effectiveness of key components of Spiking-LEAF: learnable acoustic feature extraction, two-compartment LIF (TC-LIF) neuron model, lateral feedback , lateral inhibition , and firing rate regulation loss ."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.2",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "Ablation studies",
|
| 51 |
+
"text": "Learnable filter bank and two-compartment neuron. As illustrated in row 1 and row 2 of Table 2 ###reference_###, the proposed learnable filter bank achieves substantial enhancement in feature representation when compared to the widely adopted Fbank feature. Notably, further improvements in classification accuracy are observed (see row 3) when replacing LIF neurons with TC-LIF neurons that offer richer neuronal dynamics. However, it is important to acknowledge that this improvement comes at the expense of an elevated firing rate, which has a detrimental effect on the encoding efficiency.\n###figure_3### Lateral feedback. Row 4 and row 5 of Table 2 ###reference_### highlight the potential of lateral feedback mechanisms in enhancing classification accuracy, which can be explained by the enhanced frequency sensitivity facilitated by the lateral feedback. Furthermore, the incorporation of lateral feedback is also anticipated to enhance the neuron\u2019s robustness in noisy environments. To substantiate this claim, our model is trained on clean samples and subsequently tested on noisy test samples contaminated with noise from the NOISEX-92 [33 ###reference_b33###] and CHiME-3 [34 ###reference_b34###] datasets. Fig. 3 ###reference_### illustrates the results of this evaluation, demonstrating that both the learnable filter bank and lateral feedback mechanisms contribute to enhanced noise robustness. This observation aligns with prior studies that have elucidated the role of the PCEN in fostering noise robustness [16 ###reference_b16###]. Simultaneously, Fig. 4 ###reference_### showcases how the lateral feedback aids in filtering out unwanted spikes.\n###figure_4### Lateral inhibition and spike rate regularization loss. As seen in Fig. 4 ###reference_### (b), when the spike regularization loss and lateral inhibition are not applied, the output spike representation involves a substantial amount of noise during non-speech periods. Introducing lateral inhibition or spike regularization loss alone can not suppress the noise that appeared during such periods (Figs. (b) and (c)). Particularly, introducing the spike regularization loss alone results in a uniform reduction in the output spikes (Fig. 4 ###reference_### (d)). However, this comes along with a notable reduction in accuracy as highlighted in Table 2 ###reference_### row 6. Notably, the combination of lateral inhibition and spike rate regularization (Fig. 4 ###reference_### (e)) can effectively suppress the unwanted spike during non-speech periods, yielding a sparse and yet informative spike representation."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Conclusion",
|
| 57 |
+
"text": "In this paper, we presented a fully learnable audio front-end for SNN-based speech processing, dubbed Spiking-LEAF. The Spiking-LEAF integrated a learnable filter bank with a novel IHC-LIF neuron model to achieve effective feature extraction and neural encoding. Our experimental evaluation on KWS and SI tasks demonstrated enhanced feature representation power, noise robustness, and encoding efficiency over SOTA auditory front-ends. It, therefore, opens up a myriad of opportunities for ultra-low-power speech processing at the edge with neuromorphic solutions."
|
| 58 |
+
}
|
| 59 |
+
],
|
| 60 |
+
"appendix": [],
|
| 61 |
+
"tables": {
|
| 62 |
+
"1": {
|
| 63 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S2.T1.2\" style=\"width:433.6pt;height:353.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(25.8pt,-21.0pt) scale(1.1350463203555,1.1350463203555) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.2.1\">\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.2.1.1.1\">Tasks</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.2.1.1.2\">Front-end</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.1.3\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.2.1.1.3.1\">\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.1.3.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.1.3.1.1.1\">Classifier</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.1.3.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.1.3.1.2.1\">Structure</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.1.4\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.2.1.1.4.1\">\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.1.4.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.1.4.1.1.1\">Classifier</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.1.4.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.1.4.1.2.1\">Type</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.1.5\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.2.1.1.5.1\">\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.1.5.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.1.5.1.1.1\">Test</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.1.5.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.1.5.1.2.1\">Accuracy (%)</td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.2\">\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.2.1.2.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.2.1.2.2\">Fbank <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.09469v2#bib.bib3\" title=\"\">3</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.2.3\">512-512</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.2.4\">Feedforward</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.2.5\">83.03</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.3\">\n<td class=\"ltx_td\" id=\"S2.T1.2.1.3.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.2.1.3.2\">Fbank+LIF</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.3.3\">512-512</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.3.4\">Feedforward</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.3.5\">85.24</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.4\">\n<td class=\"ltx_td\" id=\"S2.T1.2.1.4.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.2.1.4.2\">Heidelberg<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.09469v2#bib.bib17\" title=\"\">17</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.4.3\">512-512</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.4.4\">Feedforward</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.4.5\">68.14</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.5\">\n<td class=\"ltx_td\" id=\"S2.T1.2.1.5.1\"></td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"S2.T1.2.1.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.5.2.1\">Spiking-LEAF</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.5.3\">512-512</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.5.4\">Feedforward</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.5.5.1\">92.24</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.6\">\n<td class=\"ltx_td\" id=\"S2.T1.2.1.6.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.2.1.6.2\">Speech2spike <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.09469v2#bib.bib30\" title=\"\">30</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.6.3\">256-256-256</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.6.4\">Feedforward</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.6.5\">88.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.7\">\n<td class=\"ltx_td\" id=\"S2.T1.2.1.7.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.2.1.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.7.2.1\">Spiking-LEAF</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.7.3\">256-256-256</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.7.4\">Feedforward</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.7.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.7.5.1\">90.47</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.8\">\n<td class=\"ltx_td\" id=\"S2.T1.2.1.8.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.2.1.8.2\">Fbank <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.09469v2#bib.bib3\" title=\"\">3</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.8.3\">512-512</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.8.4\">Recurrent</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.8.5\">93.58</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.9\">\n<td class=\"ltx_td\" id=\"S2.T1.2.1.9.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.2.1.9.2\">Fbank+LIF</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.9.3\">512-512</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.9.4\">Recurrent</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.9.5\">92.04</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.2.1.10.1\" rowspan=\"-9\"><span class=\"ltx_text\" id=\"S2.T1.2.1.10.1.1\">KWS</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.2.1.10.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.10.2.1\">Spiking-LEAF</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.10.3\">512-512</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.10.4\">Recurrent</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.10.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.10.5.1\">93.95</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.11\">\n<td class=\"ltx_td ltx_border_t\" id=\"S2.T1.2.1.11.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.2.1.11.2\">Fbank</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.11.3\">512-512</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.11.4\">Feedforward</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S2.T1.2.1.11.5\">29.42</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.12\">\n<td class=\"ltx_td\" id=\"S2.T1.2.1.12.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.2.1.12.2\">Fbank+LIF</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.12.3\">512-512</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.12.4\">Feedforward</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.12.5\">27.23</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.13\">\n<td class=\"ltx_td\" id=\"S2.T1.2.1.13.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.2.1.13.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.13.2.1\">Spiking-LEAF</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.13.3\">512-512</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.13.4\">Feedforward</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.13.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.13.5.1\">30.17</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.14\">\n<td class=\"ltx_td\" id=\"S2.T1.2.1.14.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.2.1.14.2\">Fbank</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.14.3\">512-512</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.14.4\">Recurrent</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.2.1.14.5\">31.76</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.15\">\n<td class=\"ltx_td\" id=\"S2.T1.2.1.15.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.2.1.15.2\">Fbank+LIF</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.15.3\">512-512</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.15.4\">Recurrent</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.2.1.15.5\">29.74</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.1.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S2.T1.2.1.16.1\" rowspan=\"-6\"><span class=\"ltx_text\" id=\"S2.T1.2.1.16.1.1\">SI</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S2.T1.2.1.16.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.16.2.1\">Spiking-LEAF</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.2.1.16.3\">512-512</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.2.1.16.4\">Recurrent</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.2.1.16.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.2.1.16.5.1\">32.45</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.3.1.1\">Table 1</span>: </span>Comparison of different auditory front-ends on KWS and SI tasks. The bold color denotes the best model for each network configuration.</figcaption>\n</figure>",
|
| 64 |
+
"capture": "Table 1: Comparison of different auditory front-ends on KWS and SI tasks. The bold color denotes the best model for each network configuration."
|
| 65 |
+
},
|
| 66 |
+
"2": {
|
| 67 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T2.3\" style=\"width:433.6pt;height:247.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(90.5pt,-51.6pt) scale(1.71656594271344,1.71656594271344) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S3.T2.3.3\">\n<tr class=\"ltx_tr\" id=\"S3.T2.3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.3.3.3.4\">Acoustic features</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.3.3.3.5\">Neuron type</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.3.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.3.3.3.6\">Firing rate</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.3.3.3.7\">Accuracy</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.3.3.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.3.3.4.1\">Fbank</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.3.3.4.2\">LIF</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.3.3.4.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.3.3.4.4\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.3.3.4.5\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.3.3.4.6\">17.94%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.3.3.4.7\">85.24%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.3.3.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.5.1\">Learnable</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.5.2\">LIF</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.5.3\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.5.4\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.5.5\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.5.6\">18.25%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.5.7\">90.73%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.3.3.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.6.1\">Learnable</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.6.2\">TC-LIF</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.6.3\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.6.4\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.6.5\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.6.6\">34.21%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.6.7\">91.89%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.3.3.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.7.1\">Learnable</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.7.2\">TC-LIF</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.7.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.7.4\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.7.5\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.7.6\">40.35%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.7.7\">92.24%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.3.3.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.8.1\">Learnable</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.8.2\">TC-LIF</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S3.T2.3.3.8.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.8.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.8.5\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.8.6\">34.54%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.8.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.3.3.8.7.1\">92.43%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.3.3.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.9.1\">Learnable</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.9.2\">TC-LIF</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.9.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.9.4\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.9.5\">\u2713</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.9.6\">15.03%</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T2.3.3.9.7\">90.82%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.3.3.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.3.3.10.1\">Learnable</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.3.3.10.2\">TC-LIF</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.3.3.10.3\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.3.3.10.4\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.3.3.10.5\">\u2713</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.3.3.10.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.3.3.10.6.1\">11.96%</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S3.T2.3.3.10.7\">92.04%</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1\">Table 2</span>: </span>Ablation studies of various components of the proposed Spiking-LEAF model on the KWS task.\n</figcaption>\n</figure>",
|
| 68 |
+
"capture": "Table 2: Ablation studies of various components of the proposed Spiking-LEAF model on the KWS task.\n"
|
| 69 |
+
}
|
| 70 |
+
},
|
| 71 |
+
"image_paths": {
|
| 72 |
+
"1": {
|
| 73 |
+
"figure_path": "2309.09469v2_figure_1.png",
|
| 74 |
+
"caption": "Fig. 1: The overall architecture of the proposed SNN-based speech processing framework.",
|
| 75 |
+
"url": "http://arxiv.org/html/2309.09469v2/extracted/5490259/images/frontend_diagram.png"
|
| 76 |
+
},
|
| 77 |
+
"2": {
|
| 78 |
+
"figure_path": "2309.09469v2_figure_2.png",
|
| 79 |
+
"caption": "Fig. 2: Computational graphs of LIF and IHC-LIF neurons.",
|
| 80 |
+
"url": "http://arxiv.org/html/2309.09469v2/extracted/5490259/images/IHC-LIF_neuron_1.png"
|
| 81 |
+
},
|
| 82 |
+
"3": {
|
| 83 |
+
"figure_path": "2309.09469v2_figure_3.png",
|
| 84 |
+
"caption": "Fig. 3: Test accuracy on the KWS task with varying SNRs.",
|
| 85 |
+
"url": "http://arxiv.org/html/2309.09469v2/extracted/5490259/images/SNR.png"
|
| 86 |
+
},
|
| 87 |
+
"4": {
|
| 88 |
+
"figure_path": "2309.09469v2_figure_4.png",
|
| 89 |
+
"caption": "Fig. 4: This figure illustrates the Fbank feature and spike representation generated by Spiking-LEAF without and with lateral inhibition and spike rate regularization loss.",
|
| 90 |
+
"url": "http://arxiv.org/html/2309.09469v2/extracted/5490259/images/rep_1.png"
|
| 91 |
+
}
|
| 92 |
+
},
|
| 93 |
+
"validation": true,
|
| 94 |
+
"references": [
|
| 95 |
+
{
|
| 96 |
+
"1": {
|
| 97 |
+
"title": "\u201cAccurate and efficient time-domain classification with adaptive\nspiking recurrent neural networks,\u201d",
|
| 98 |
+
"author": "Bojian Yin, Federico Corradi, and Sander M Boht\u00e9,",
|
| 99 |
+
"venue": "Nature Machine Intelligence, vol. 3, no. 10, pp. 905\u2013913,\n2021.",
|
| 100 |
+
"url": null
|
| 101 |
+
}
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"2": {
|
| 105 |
+
"title": "\u201cSpatial-temporal self-attention for asynchronous spiking neural\nnetworks,\u201d",
|
| 106 |
+
"author": "Yuchen Wang, Kexin Shi, Chengzhuo Lu, Yuguo Liu, Malu Zhang, and Hong Qu,",
|
| 107 |
+
"venue": "in Proceedings of the Thirty-Second International Joint\nConference on Artificial Intelligence, IJCAI-23, Edith Elkind, Ed. 8 2023,\npp. 3085\u20133093, International Joint Conferences on Artificial Intelligence\nOrganization,",
|
| 108 |
+
"url": null
|
| 109 |
+
}
|
| 110 |
+
},
|
| 111 |
+
{
|
| 112 |
+
"3": {
|
| 113 |
+
"title": "\u201cA surrogate gradient spiking baseline for speech command\nrecognition,\u201d",
|
| 114 |
+
"author": "Alexandre Bittar and Philip N Garner,",
|
| 115 |
+
"venue": "Frontiers in Neuroscience, vol. 16, pp. 865897, 2022.",
|
| 116 |
+
"url": null
|
| 117 |
+
}
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"4": {
|
| 121 |
+
"title": "\u201cDeep spiking neural networks for large vocabulary automatic speech\nrecognition,\u201d",
|
| 122 |
+
"author": "Jibin Wu, Emre Y\u0131lmaz, Malu Zhang, Haizhou Li, and Kay Chen Tan,",
|
| 123 |
+
"venue": "Frontiers in neuroscience, vol. 14, pp. 199, 2020.",
|
| 124 |
+
"url": null
|
| 125 |
+
}
|
| 126 |
+
},
|
| 127 |
+
{
|
| 128 |
+
"5": {
|
| 129 |
+
"title": "\u201cA spiking neural network framework for robust sound\nclassification,\u201d",
|
| 130 |
+
"author": "Jibin Wu, Yansong Chua, Malu Zhang, Haizhou Li, and Kay Chen Tan,",
|
| 131 |
+
"venue": "Frontiers in neuroscience, vol. 12, pp. 836, 2018.",
|
| 132 |
+
"url": null
|
| 133 |
+
}
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"6": {
|
| 137 |
+
"title": "\u201cA biologically plausible speech recognition framework based on\nspiking neural networks,\u201d",
|
| 138 |
+
"author": "Jibin Wu, Yansong Chua, and Haizhou Li,",
|
| 139 |
+
"venue": "in 2018 international joint conference on neural networks\n(IJCNN). IEEE, 2018, pp. 1\u20138.",
|
| 140 |
+
"url": null
|
| 141 |
+
}
|
| 142 |
+
},
|
| 143 |
+
{
|
| 144 |
+
"7": {
|
| 145 |
+
"title": "\u201cNeural population coding for effective temporal classification,\u201d",
|
| 146 |
+
"author": "Zihan Pan, Jibin Wu, Malu Zhang, Haizhou Li, and Yansong Chua,",
|
| 147 |
+
"venue": "in 2019 International Joint Conference on Neural Networks\n(IJCNN). IEEE, 2019, pp. 1\u20138.",
|
| 148 |
+
"url": null
|
| 149 |
+
}
|
| 150 |
+
},
|
| 151 |
+
{
|
| 152 |
+
"8": {
|
| 153 |
+
"title": "\u201cAn efficient threshold-driven aggregate-label learning algorithm\nfor multimodal information processing,\u201d",
|
| 154 |
+
"author": "Malu Zhang, Xiaoling Luo, Yi Chen, Jibin Wu, Ammar Belatreche, Zihan Pan, Hong\nQu, and Haizhou Li,",
|
| 155 |
+
"venue": "IEEE Journal of Selected Topics in Signal Processing, vol. 14,\nno. 3, pp. 592\u2013602, 2020.",
|
| 156 |
+
"url": null
|
| 157 |
+
}
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"9": {
|
| 161 |
+
"title": "\u201cProgressive tandem learning for pattern recognition with deep\nspiking neural networks,\u201d",
|
| 162 |
+
"author": "Jibin Wu, Chenglin Xu, Xiao Han, Daquan Zhou, Malu Zhang, Haizhou Li, and\nKay Chen Tan,",
|
| 163 |
+
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,\nvol. 44, no. 11, pp. 7824\u20137840, 2021.",
|
| 164 |
+
"url": null
|
| 165 |
+
}
|
| 166 |
+
},
|
| 167 |
+
{
|
| 168 |
+
"10": {
|
| 169 |
+
"title": "\u201cFactorizing Knowledge in Neural Networks,\u201d",
|
| 170 |
+
"author": "Xingyi Yang, Jingwen Ye, and Xinchao Wang,",
|
| 171 |
+
"venue": "in European Conference on Computer Vision, 2022.",
|
| 172 |
+
"url": null
|
| 173 |
+
}
|
| 174 |
+
},
|
| 175 |
+
{
|
| 176 |
+
"11": {
|
| 177 |
+
"title": "\u201cLLM-Pruner: On the Structural Pruning of Large Language Models,\u201d",
|
| 178 |
+
"author": "Xinyin Ma, Gongfan Fang, and Xinchao Wang,",
|
| 179 |
+
"venue": "in Advances in neural information processing systems, 2023.",
|
| 180 |
+
"url": null
|
| 181 |
+
}
|
| 182 |
+
},
|
| 183 |
+
{
|
| 184 |
+
"12": {
|
| 185 |
+
"title": "\u201cAn efficient and perceptually motivated auditory neural encoding\nand decoding algorithm for spiking neural networks,\u201d",
|
| 186 |
+
"author": "Zihan Pan, Yansong Chua, Jibin Wu, Malu Zhang, Haizhou Li, and Eliathamby\nAmbikairajah,",
|
| 187 |
+
"venue": "Frontiers in neuroscience, vol. 13, pp. 1420, 2020.",
|
| 188 |
+
"url": null
|
| 189 |
+
}
|
| 190 |
+
},
|
| 191 |
+
{
|
| 192 |
+
"13": {
|
| 193 |
+
"title": "\u201cLearning the speech front-end with raw waveform cldnns,\u201d",
|
| 194 |
+
"author": "Tara Sainath, Ron J Weiss, Kevin Wilson, Andrew W Senior, and Oriol Vinyals,",
|
| 195 |
+
"venue": "2015.",
|
| 196 |
+
"url": null
|
| 197 |
+
}
|
| 198 |
+
},
|
| 199 |
+
{
|
| 200 |
+
"14": {
|
| 201 |
+
"title": "\u201cSpeech acoustic modeling from raw multichannel waveforms,\u201d",
|
| 202 |
+
"author": "Yedid Hoshen, Ron J Weiss, and Kevin W Wilson,",
|
| 203 |
+
"venue": "in 2015 IEEE international conference on acoustics, speech and\nsignal processing (ICASSP). IEEE, 2015, pp. 4624\u20134628.",
|
| 204 |
+
"url": null
|
| 205 |
+
}
|
| 206 |
+
},
|
| 207 |
+
{
|
| 208 |
+
"15": {
|
| 209 |
+
"title": "\u201cSpeaker recognition from raw waveform with sincnet,\u201d",
|
| 210 |
+
"author": "Mirco Ravanelli and Yoshua Bengio,",
|
| 211 |
+
"venue": "in 2018 IEEE Spoken Language Technology Workshop (SLT). IEEE,\n2018, pp. 1021\u20131028.",
|
| 212 |
+
"url": null
|
| 213 |
+
}
|
| 214 |
+
},
|
| 215 |
+
{
|
| 216 |
+
"16": {
|
| 217 |
+
"title": "\u201cLeaf: A learnable frontend for audio classification,\u201d",
|
| 218 |
+
"author": "Neil Zeghidour, Olivier Teboul, F\u00e9lix de Chaumont Quitry, and Marco\nTagliasacchi,",
|
| 219 |
+
"venue": "arXiv preprint arXiv:2101.08596, 2021.",
|
| 220 |
+
"url": null
|
| 221 |
+
}
|
| 222 |
+
},
|
| 223 |
+
{
|
| 224 |
+
"17": {
|
| 225 |
+
"title": "\u201cThe heidelberg spiking data sets for the systematic evaluation of\nspiking neural networks,\u201d",
|
| 226 |
+
"author": "Benjamin Cramer, Yannik Stradmann, Johannes Schemmel, and Friedemann Zenke,",
|
| 227 |
+
"venue": "IEEE Transactions on Neural Networks and Learning Systems, vol.\n33, no. 7, pp. 2744\u20132757, 2020.",
|
| 228 |
+
"url": null
|
| 229 |
+
}
|
| 230 |
+
},
|
| 231 |
+
{
|
| 232 |
+
"18": {
|
| 233 |
+
"title": "\u201cA learning theory for reward-modulated spike-timing-dependent\nplasticity with application to biofeedback,\u201d",
|
| 234 |
+
"author": "Robert Legenstein, Dejan Pecevski, and Wolfgang Maass,",
|
| 235 |
+
"venue": "PLoS computational biology, vol. 4, no. 10, pp. e1000180, 2008.",
|
| 236 |
+
"url": null
|
| 237 |
+
}
|
| 238 |
+
},
|
| 239 |
+
{
|
| 240 |
+
"19": {
|
| 241 |
+
"title": "\u201cSend-on-delta concept: An event-based data reporting strategy,\u201d",
|
| 242 |
+
"author": "Marek Miskowicz,",
|
| 243 |
+
"venue": "sensors, vol. 6, no. 1, pp. 49\u201363, 2006.",
|
| 244 |
+
"url": null
|
| 245 |
+
}
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"20": {
|
| 249 |
+
"title": "\u201cMpd-al: an efficient membrane potential driven aggregate-label\nlearning algorithm for spiking neurons,\u201d",
|
| 250 |
+
"author": "Malu Zhang, Jibin Wu, Yansong Chua, Xiaoling Luo, Zihan Pan, Dan Liu, and\nHaizhou Li,",
|
| 251 |
+
"venue": "in Proceedings of the AAAI conference on artificial\nintelligence, 2019, vol. 33, pp. 1327\u20131334.",
|
| 252 |
+
"url": null
|
| 253 |
+
}
|
| 254 |
+
},
|
| 255 |
+
{
|
| 256 |
+
"21": {
|
| 257 |
+
"title": "\u201cBsa, a fast and accurate spike train encoding scheme,\u201d",
|
| 258 |
+
"author": "Benjamin Schrauwen and Jan Van Campenhout,",
|
| 259 |
+
"venue": "in Proceedings of the International Joint Conference on Neural\nNetworks, 2003. IEEE, 2003, vol. 4, pp. 2825\u20132830.",
|
| 260 |
+
"url": null
|
| 261 |
+
}
|
| 262 |
+
},
|
| 263 |
+
{
|
| 264 |
+
"22": {
|
| 265 |
+
"title": "Neuroscience: exploring the brain, enhanced edition: exploring\nthe brain,",
|
| 266 |
+
"author": "Mark Bear, Barry Connors, and Michael A Paradiso,",
|
| 267 |
+
"venue": "Jones & Bartlett Learning, 2020.",
|
| 268 |
+
"url": null
|
| 269 |
+
}
|
| 270 |
+
},
|
| 271 |
+
{
|
| 272 |
+
"23": {
|
| 273 |
+
"title": "\u201cTrainable frontend for robust and far-field keyword spotting,\u201d",
|
| 274 |
+
"author": "Yuxuan Wang, Pascal Getreuer, Thad Hughes, Richard F Lyon, and Rif A Saurous,",
|
| 275 |
+
"venue": "in 2017 IEEE International Conference on Acoustics, Speech and\nSignal Processing (ICASSP). IEEE, 2017, pp. 5670\u20135674.",
|
| 276 |
+
"url": null
|
| 277 |
+
}
|
| 278 |
+
},
|
| 279 |
+
{
|
| 280 |
+
"24": {
|
| 281 |
+
"title": "\u201cLearning filterbanks from raw speech for phone recognition,\u201d",
|
| 282 |
+
"author": "Neil Zeghidour, Nicolas Usunier, Iasonas Kokkinos, Thomas Schaiz, Gabriel\nSynnaeve, and Emmanuel Dupoux,",
|
| 283 |
+
"venue": "in 2018 IEEE international conference on acoustics, speech and\nsignal Processing (ICASSP). IEEE, 2018, pp. 5509\u20135513.",
|
| 284 |
+
"url": null
|
| 285 |
+
}
|
| 286 |
+
},
|
| 287 |
+
{
|
| 288 |
+
"25": {
|
| 289 |
+
"title": "Spiking neuron models: Single neurons, populations, plasticity,",
|
| 290 |
+
"author": "Wulfram Gerstner and Werner M Kistler,",
|
| 291 |
+
"venue": "Cambridge university press, 2002.",
|
| 292 |
+
"url": null
|
| 293 |
+
}
|
| 294 |
+
},
|
| 295 |
+
{
|
| 296 |
+
"26": {
|
| 297 |
+
"title": "\u201cLong short-term memory with two-compartment spiking neuron,\u201d",
|
| 298 |
+
"author": "Shimin Zhang, Qu Yang, Chenxiang Ma, Jibin Wu, Haizhou Li, and Kay Chen Tan,",
|
| 299 |
+
"venue": "arXiv preprint arXiv:2307.07231, 2023.",
|
| 300 |
+
"url": null
|
| 301 |
+
}
|
| 302 |
+
},
|
| 303 |
+
{
|
| 304 |
+
"27": {
|
| 305 |
+
"title": "\u201cIntegrating the active process of hair cells with cochlear\nfunction,\u201d",
|
| 306 |
+
"author": "AJ Hudspeth,",
|
| 307 |
+
"venue": "Nature Reviews Neuroscience, vol. 15, no. 9, pp. 600\u2013614,\n2014.",
|
| 308 |
+
"url": null
|
| 309 |
+
}
|
| 310 |
+
},
|
| 311 |
+
{
|
| 312 |
+
"28": {
|
| 313 |
+
"title": "\u201cOlivocochlear efferents: Their action, effects, measurement and\nuses, and the impact of the new conception of cochlear mechanical\nresponses,\u201d",
|
| 314 |
+
"author": "John J Guinan Jr,",
|
| 315 |
+
"venue": "Hearing research, vol. 362, pp. 38\u201347, 2018.",
|
| 316 |
+
"url": null
|
| 317 |
+
}
|
| 318 |
+
},
|
| 319 |
+
{
|
| 320 |
+
"29": {
|
| 321 |
+
"title": "\u201cThe competition between the noise and shear motion sensitivity of\ncochlear inner hair cell stereocilia,\u201d",
|
| 322 |
+
"author": "Aritra Sasmal and Karl Grosh,",
|
| 323 |
+
"venue": "Biophysical Journal, vol. 114, no. 2, pp. 474\u2013483, 2018.",
|
| 324 |
+
"url": null
|
| 325 |
+
}
|
| 326 |
+
},
|
| 327 |
+
{
|
| 328 |
+
"30": {
|
| 329 |
+
"title": "\u201cSpeech2spikes: Efficient audio encoding pipeline for real-time\nneuromorphic systems,\u201d",
|
| 330 |
+
"author": "Kenneth Michael Stewart, Timothy Shea, Noah Pacik-Nelson, Eric Gallo, and\nAndreea Danielescu,",
|
| 331 |
+
"venue": "in Proceedings of the 2023 Annual Neuro-Inspired Computational\nElements Conference, 2023, pp. 71\u201378.",
|
| 332 |
+
"url": null
|
| 333 |
+
}
|
| 334 |
+
},
|
| 335 |
+
{
|
| 336 |
+
"31": {
|
| 337 |
+
"title": "\u201cSpeech Commands: A Dataset for Limited-Vocabulary Speech\nRecognition,\u201d",
|
| 338 |
+
"author": "P. Warden,",
|
| 339 |
+
"venue": "ArXiv e-prints, Apr. 2018.",
|
| 340 |
+
"url": null
|
| 341 |
+
}
|
| 342 |
+
},
|
| 343 |
+
{
|
| 344 |
+
"32": {
|
| 345 |
+
"title": "\u201cVoxceleb: a large-scale speaker identification dataset,\u201d",
|
| 346 |
+
"author": "A. Nagrani, J. S. Chung, and A. Zisserman,",
|
| 347 |
+
"venue": "in INTERSPEECH, 2017.",
|
| 348 |
+
"url": null
|
| 349 |
+
}
|
| 350 |
+
},
|
| 351 |
+
{
|
| 352 |
+
"33": {
|
| 353 |
+
"title": "\u201cAssessment for automatic speech recognition: Ii. noisex-92: A\ndatabase and an experiment to study the effect of additive noise on speech\nrecognition systems,\u201d",
|
| 354 |
+
"author": "Andrew Varga and Herman JM Steeneken,",
|
| 355 |
+
"venue": "Speech communication, vol. 12, no. 3, pp. 247\u2013251, 1993.",
|
| 356 |
+
"url": null
|
| 357 |
+
}
|
| 358 |
+
},
|
| 359 |
+
{
|
| 360 |
+
"34": {
|
| 361 |
+
"title": "\u201cThe third \u2018chime\u2019speech separation and recognition challenge:\nDataset, task and baselines,\u201d",
|
| 362 |
+
"author": "Jon Barker, Ricard Marxer, Emmanuel Vincent, and Shinji Watanabe,",
|
| 363 |
+
"venue": "in 2015 IEEE Workshop on Automatic Speech Recognition and\nUnderstanding (ASRU). IEEE, 2015, pp. 504\u2013511.",
|
| 364 |
+
"url": null
|
| 365 |
+
}
|
| 366 |
+
}
|
| 367 |
+
],
|
| 368 |
+
"url": "http://arxiv.org/html/2309.09469v2"
|
| 369 |
+
}
|
20240323/2309.09574v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2309.10062v2.json
ADDED
|
@@ -0,0 +1,143 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"title": "SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models",
|
| 3 |
+
"abstract": "In this work, we introduce SMART-LLM, an innovative framework designed for embodied multi-robot task planning. SMART-LLM: Smart Multi-Agent Robot Task Planning using Large Language Models (LLMs), harnesses the power of LLMs to convert high-level task instructions provided as input into a multi-robot task plan. It accomplishes this by executing a series of stages, including task decomposition, coalition formation, and task allocation, all guided by programmatic LLM prompts within the few-shot prompting paradigm. We create a benchmark dataset designed for validating the multi-robot task planning problem, encompassing four distinct categories of high-level instructions that vary in task complexity. Our evaluation experiments span both simulation and real-world scenarios, demonstrating that the proposed model can achieve promising results for generating multi-robot task plans. The experimental videos, code, and datasets from the work can be found at https://sites.google.com/view/smart-llm/.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "In recent years, multi-robot systems have gained prominence in various applications, from housekeeping tasks [1 ###reference_b1###] to search and rescue missions [2 ###reference_b2###] and warehouse automation [3 ###reference_b3###]. These systems, composed of multiple autonomous robots, can greatly enhance efficiency, scalability, and adaptability in numerous tasks. Typically, these robot arrays exhibit heterogeneity in terms of types and skill levels among individual agents. Consequently, the overall system complexity is heightened, emphasizing the critical importance of skillful task allocation among these agents. Effective allocation of complex tasks among multiple agents involves several crucial steps, including task decomposition, assigning sub-tasks to suitable agents, and ensuring correct task sequencing [4 ###reference_b4###]. This proficiency requires access to external knowledge or domain-specific information about the task.\nTraditional multi-robot task planning often struggles with diverse tasks and complex environments [4 ###reference_b4###], relying on fixed algorithms.\nRelying on fixed algorithms complicates the process of transitioning from one task to another without substantial modifications to the code. These challenges intensify when tasks are described in natural language, as such descriptions can lack precision and completeness. Take, for instance, the task presented in Fig. 1 ###reference_###: \u201cClosing the laptop and watching TV in a dimly lit room\u201d. Notably, this task description does not explicitly mention turning off the lights before watching TV. Given the incomplete and ambiguous nature of the instruction, it is crucial to leverage extensive prior knowledge to interpret the task and aid in efficient task planning.\n###figure_1### Large language models (LLMs), such as GPT-4 [5 ###reference_b5###], GPT-3.5 [6 ###reference_b6###] and Llama2 [7 ###reference_b7###], have demonstrated remarkable capabilities in understanding natural language, logical reasoning, and generalization. This presents exciting opportunities for enhancing comprehension and planning in multi-robot systems. In this paper, we introduce SMART-LLM, an innovative mechanism for task assignment to embodied agents using LLMs. SMART-LLM provides LLMs with Python programming scripts that encapsulate intricate robot skills and environmental details, including object information. It also provides practical examples of task decomposition and allocation based on the robot\u2019s capabilities and the environment. Leveraging programming language structures, SMART-LLM taps into the vast dataset of internet code snippets and documentation available to LLMs. As illustrated in Fig. 1 ###reference_###, when dealing with a complex task, SMART-LLM divides the task into sub-tasks, each related to specific objects or actions. 
These sub-tasks are then combined and delegated to suitable robots with the necessary skills to perform them.\nThe main contributions of this work are three-fold:\nMulti-Robot Task Planning Framework for integrating task decomposition, coalition formation, and skill-based task assignment, by leveraging LLMs.\nBenchmark Dataset: A benchmark dataset designed for evaluating multi-agent task planning systems, covering a spectrum of tasks, ranging from elemental to complex ones in the AI2-THOR [8 ###reference_b8###] simulation platform.\nImplementation and Evaluation of the framework in both simulated and real-world settings, undergoing thorough testing across a wide array of tasks."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Related Works",
|
| 15 |
+
"text": "Multi-Robot Task Planning. Multi-robot task planning is important in robotics, requiring effective coordination among robots. Typically, the process of multi-robot task planning encompasses four distinct phases: task decomposition, coalition formation, task allocation, and task execution [4 ###reference_b4###]. Task decomposition entails the subdivision of a given task into manageable sub-tasks. The decomposition methods can either be task-specific [9 ###reference_b9###] or necessitate copious amounts of data for generating policies [10 ###reference_b10###]. Task-specific decomposition methods cannot be generalized and gathering prior knowledge to decompose a diverse range of tasks can pose a significant challenge. A modern, intuitive strategy employs natural language to articulate tasks and utilizes pre-trained language models equipped with diverse domain knowledge to segment them into sub-tasks and predict their sequential order over time [11 ###reference_b11###, 12 ###reference_b12###]. Similarly, in SMART-LLM, we employ large-language models to deconstruct tasks into robot actions aligned with their skills, facilitating seamless execution by the robot.\nIn coalition formation and task allocation, efficiently assigning decomposed tasks to multiple agents is crucial for the effective completion of the given task. To this end, a plethora of methodologies have been employed, encompassing negotiation [13 ###reference_b13###], auctioning [14 ###reference_b14###], consensus-based strategies [15 ###reference_b15###] and reinforcement learning [16 ###reference_b16###].\nWhile these methods exhibit reliability, they are typically tailored and optimized according to specific end goals and applications. This necessitates additional effort when scaling them across different applications and optimizing them under varied constraints. In our approach, we take advantage of the inherent generalizability of LLMs. This enables tasks to scale seamlessly, allowing teams to be assigned in diverse configurations without imposing additional modifications to the constraints within the code.\nDegrees of automation, contingent on the number of task-planning steps a method can execute, have been conceptualized [4 ###reference_b4###]. Most methods predominantly fall into the first or second level of automation. The first level exclusively automates task execution [17 ###reference_b17###, 18 ###reference_b18###]. Meanwhile, the second level automates either task allocation and execution [19 ###reference_b19###, 20 ###reference_b20###]; or coalition formation and execution [21 ###reference_b21###]. The third level of automation encompasses coalition, allocation, and execution but does not involve task decomposition [22 ###reference_b22###, 23 ###reference_b23###]. In a pioneering stride towards the fourth level of automation [24 ###reference_b24###], a method adeptly manages all four facets of task planning using natural language prompts and Long Short Term Memory (LSTM). Existing methods in the literature often have shortcomings, such as not covering all task planning steps; requiring extensive task-specific demonstration data for model training [24 ###reference_b24###], which often lacks generalizability when faced with unseen tasks, or being limited to specific tasks. Our method stands out by efficiently performing all four task-planning steps and utilizing LLMs to generalize across various tasks through a few-shot training approach.\nLLMs for robotics. 
Large Language Models excel in generalization, commonsense reasoning [6 ###reference_b6###, 25 ###reference_b25###], and are increasingly sought after for inclusion in robotics systems [26 ###reference_b26###, 27 ###reference_b27###]. They play a vital role in crafting task plans for robots, making use of few or zero-shot learning methods [6 ###reference_b6###]. Various techniques for generating these robotic task plans using LLMs have emerged, encompassing value function-based approaches [27 ###reference_b27###, 28 ###reference_b28###] and context-driven prompts [29 ###reference_b29###, 30 ###reference_b30###, 31 ###reference_b31###, 32 ###reference_b32###]. Moreover, LLMs have found utility in providing feedback and refining task plans to enhance robot performance [33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###].\nWhile LLMs excel at creating flexible task plans, they face challenges when applied to larger multi-agent teams. In the realm of multi-agent systems, progress has been made in enhancing agent cooperation with the use of LLMs [36 ###reference_b36###, 37 ###reference_b37###, 38 ###reference_b38###]. These approaches involve equipping individual agents with their own LLMs to improve interactions and boost their collaborative skills. However, these methods prioritize improving multi-agent system efficiency but do not tackle the specific task of creating task plans for multi-robot teams. These plans involve assigning and sequencing tasks for individual robots based on their skills and the environment\u2019s condition. Our approach focuses on task decomposition and allocation in a heterogeneous robot team, considering individual robot skills. We achieve multi-robot task planning without the need for separate LLMs per robot. This simplifies planning and provides a unified solution for multi-robot task coordination."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Problem Formulation",
|
| 21 |
+
"text": "Given a high-level language instruction , the goal of this work is to understand the instruction, compute the necessary steps for task completion, and formulate a task plan that enables its execution. Tasks are executed in a manner that maximizes the utilization of the available robots, by performing tasks in parallel when feasible. These tasks are performed in an environment that encapsulates numerous entities and objects. We assume that the given instruction can be successfully executed in the environment .\nTo execute the task, we have a set of heterogeneous embodied robot agents . Let be the set of all skills or actions that an agent may be capable of performing. In this work, we assume that robot skills, , are either pre-implemented in the system or that there are available API calls to execute these skills. Each of the agents possesses a diverse set of skills, that they can perform, each subject to specific constraints. Here, represents the list of skills of robot , and , for . For instance, for the robot skill PickUpObject, there may be constraints on the maximum mass that a robot can pick.\nNow, the instruction can be decomposed into a temporarily ordered set of sub-tasks, , based on the robot skills, , and the environment , where denotes the temporal order of a sub-task and . It is worth noting that some of the sub-tasks can be executed in parallel, having the same temporal precedence. Let be the list of skills needed by a robot to complete a sub-task, , where and . Based on , the sub-task can be allocated to a robot with skills if , where and . In cases where no single robot satisfies this constraint, a team of two or more robots is required to perform the sub-task. In such scenarios, we form a team of robots, , each possessing skills , such that .\n###figure_2###"
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "IV Methodology",
|
| 27 |
+
"text": "The proposed approach utilizes LLMs to perform Task Decomposition, Coalition Formation, and Task Allocation within the context of multi-robot task planning. Our approach employs Pythonic prompts to guide the LLM in generating code for task decomposition and allocation. We opt for Pythonic prompts over natural language prompts because they facilitate the generation of executable code directly from the LLMs. Moreover, Pythonic prompts adhere to a structured syntax, enhancing the LLM\u2019s comprehension of the prompts [39 ###reference_b39###].\nWe provide concise prompt samples with line-by-line comments and block comments giving task summaries for each step, aiding the LLM in understanding and producing code effectively. The prompts were structured to mimic typical code, complete with comments to delineate sample tasks. Details regarding robot skills and object properties were encoded as Python dictionaries, providing a concise representation that the LLM could readily comprehend [39 ###reference_b39###] and also help reduce the token size. The comments were meticulously structured, incorporating detailed instructions on task execution and allocation requirements, enabling the LLM to comprehend and replicate the process for new tasks."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "4.1",
|
| 31 |
+
"parent_section_id": "4",
|
| 32 |
+
"section_name": "IV-A Stage 1: Task Decomposition",
|
| 33 |
+
"text": "In this stage, we decompose the given instruction, , into a set of independent sub-tasks, , along with a sequence of actions for performing each sub-task. To decompose a task, we provide information about the environment, (including objects and other entities present in the environment), and a list of primitive skills, , that robots can perform. This information about the environment and the robot\u2019s skills is utilized to decompose the task such that it can be performed in that environment using the skills possessed by the robots.\nFollowing the initial few-shot LLM prompting, we provide the LLM with various pieces of information: details about the robot\u2019s skills, information about the environment, several examples of sample tasks, and corresponding Python code-based decomposed plans. The LLM takes all this information along with the input task, , that needs to be decomposed and generates the sub-tasks, . In the Stage 1 block of Fig. 2 ###reference_### corresponding to task decomposition, the purple box corresponds to the list of robot skills, ; the blue box corresponds to details about the environment, ; green box corresponds to the decomposed task samples given as part of the prompt; and red box corresponds to the given instruction . The red box in the Stage 2 block of Fig. 2 ###reference_### is the output from the LLM, corresponding to the sub-tasks, ."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "4.2",
|
| 37 |
+
"parent_section_id": "4",
|
| 38 |
+
"section_name": "IV-B Stage 2: Coalition Formation",
|
| 39 |
+
"text": "Coalition formation is used to form robot teams to perform each of the sub-tasks computed through task decomposition. In task decomposition, the primary task is broken down into sub-tasks, based on common sense and the various entities present in the environment, . However, this initial breakdown does not take into account the specific skills of individual robots, , or their capabilities to perform each sub-task. Therefore, in this stage, we prompt the LLM to analyze the list of skills needed to perform each sub-task, , and the skills of individual robots, to identify the suitable robot(s) for each sub-task. To achieve this, we prompt the LLM with samples of decomposed tasks and corresponding coalition formation policies that describe how available robots can be assigned to the sub-tasks.\nThe coalition policy consists of statements regarding whether robots possess all the necessary skills to perform a sub-task and how any skill gaps in a single robot\u2019s ability to perform a sub-task can be addressed by involving additional robots. The samples we include encompass various cases:\nIn scenarios, where a single robot possesses all the required skills to perform a sub-task, leading to a one-to-one assignment of robots to tasks.\nInstances where no single robot possesses all the skills needed for a sub-task, resulting in multiple robots collaborating on the same task.\nCases where a robot possesses the necessary skills for a sub-task but is constrained by certain limitations (for example, a robot with a maximum weight limit for a pick-up task). In such cases, additional robots are employed to overcome these constraints.\nBy presenting these samples along with the decomposed task, , and a list of available robots and their skills , the LLM generates a new coalition formation policy that outlines how the given robots can be assigned to perform the input task. The Stage 2 block of Fig. 2 ###reference_### corresponding to coalition formation, the green box represents the sample decomposed tasks given as part of the prompt; the blue box shows the available robots and their skills along with details about the environment ; the orange box delineates a general summary of the coalition policy, whereas in the experiments we utilize actual coalition policy for the sample decomposed tasks; and the red box is the decomposed task for which a coalition policy needs to be generated. The red box in the Stage 3 block of Fig. 2 ###reference_### is the output from the LLM, corresponding to coalition formation policy for the sub-tasks, and the instruction ."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4.3",
|
| 43 |
+
"parent_section_id": "4",
|
| 44 |
+
"section_name": "IV-C Stage 3: Task Allocation",
|
| 45 |
+
"text": "Task allocation involves the precise assignment of either a specific robot or a team of robots to individual sub-tasks, guided by the coalition formation policy established in the preceding phase. Similar to the previous stages, a prompt consisting of decomposed task samples, coalition formation policies, and allocated plans for those tasks is constructed. By incorporating the decomposed sub-tasks, , and the previously generated coalition formation policy for the given input task, , we instruct the LLM to distribute robots to each sub-task according to the coalitions and produce executable code. Depending on the coalition policy, a sub-task may be allocated to either a single robot or a group of robots.\nThe Stage 3 block in Fig. 2 ###reference_### shows sample decomposed plans (green box), the list of available robots and their skills (blue box), their coalition policies (orange box), and their allocated plans (violet box) used as part of the prompt, along with the coalition policy for input task (red box), to generate the final executable code in the Stage 4 block (red box)."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.4",
|
| 49 |
+
"parent_section_id": "4",
|
| 50 |
+
"section_name": "IV-D Stage 4: Task Execution",
|
| 51 |
+
"text": "The LLM generates task plans for multi-robot teams through task allocation, which are then executed by an interpreter with either a virtual or physical team of robots. These plans are executed by making API calls to the robots\u2019 low-level skills, ensuring the efficient execution of the tasks. As shown in Stage 4 of Fig. 2 ###reference_###, the allocated task plan (red box) for the example task \u201cturn off the desk and floor light and watch TV\u201d is executed by a team of three robots in a certain temporal order. In this stage, the figure also displays the sequence of robot views as they perform the task along with captions indicating the ongoing task step. Captions marked in green correspond to specific actions completed by the robot."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "5",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Experiments",
|
| 57 |
+
"text": ""
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "5.1",
|
| 61 |
+
"parent_section_id": "5",
|
| 62 |
+
"section_name": "Benchmark Dataset",
|
| 63 |
+
"text": "To evaluate the performance of SMART-LLM and facilitate a quantitative comparison with other baseline methods, we created a benchmark dataset tailored for the evaluation of natural language-based task planning in multi-robot scenarios. This dataset originates from environments and actions within AI2-THOR [8 ###reference_b8###], a deterministic simulation platform for typical household activities. The dataset encompasses 36 high-level instructions that articulate tasks and corresponding AI2-THOR floor plans, providing the spatial context for task execution. Given the multi-robot facet of our dataset, we include information on the number of robots available to perform a task and a comprehensive list of their respective skills. The number of available robots for each task ranges from 1 to 4, with varying individual skills, allowing for scalability evaluation of task planning methods.\nIn the dataset, we also include the final ground truth states for the tasks, capturing the definitive states of relevant objects and their conditions within the environment after task completion. This ground truth delineates a set of symbolic goal conditions crucial for achieving task success. It includes details such as the object\u2019s position in the environment and its conditions like heated, cooked, sliced, or washed after the task is correctly executed. In addition to the final ground truth states, we provide data on the number of transitions in robot utilization during task execution. Transitions occur when one group of robots completes their sub-tasks, allowing another group to take over. This quantifies the utilization of the multi-robot system. If tasks are not appropriately parallelized during experiments and robots are not fully utilized, sub-tasks may be performed sequentially rather than concurrently, resulting in more transitions in robot utilization compared to ground truth utilization.\nTo evaluate the performance of our proposed method across diverse task complexities, our dataset comprises four task categories:\nElemental Tasks are designed for a single robot. In these scenarios, a single robot is assumed to possess all the necessary skills and abilities, eliminating the need for coordination with multiple robots.\nSimple Tasks involve multiple objects and can be decomposed into sequential or parallel sub-tasks but not both concurrently. Again, all the robots possess all the necessary skills.\nCompound Tasks are similar to Simple Tasks, with flexibility in execution strategies (sequential, parallel, or hybrid). However, the robots are heterogeneous, possessing specialized skills and properties, allowing individual robots to handle sub-tasks that match their skills and properties.\nComplex Tasks are intended for heterogeneous robot teams and resemble Compound Tasks in their characteristics like task decomposition, multi-robot engagement, and the presence of multiple objects. Unlike Compound Tasks, individual robots cannot independently perform sub-tasks due to limitations in their skills or properties, necessitating strategic team assignments to leverage their combined capabilities for effective task completion.\nThe dataset comprises 6 tasks categorized as elemental tasks, 8 tasks as simple tasks, 14 tasks as compound tasks, and 8 tasks as complex tasks."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "5.2",
|
| 67 |
+
"parent_section_id": "5",
|
| 68 |
+
"section_name": "Simulation Experiments",
|
| 69 |
+
"text": "Our method\u2019s validation takes place within the AI2-THOR simulated environment, where we employ our benchmark dataset for rigorous evaluation and comparative analysis against baseline approaches. Our experimental setup encompasses a varied set of example prompts, including 5 Pythonic plan examples for task decomposition, 3 for coalition formation, and 4 for task allocation. These example prompts cover tasks that can be parallelized using threading, tasks that can only be executed sequentially, and tasks that involve both parallel and sequential execution. This diverse range of examples is strategically tailored to mirror the inherent complexities present in distinct phases of multi-robot task planning.\nIt is worth noting that the example prompts were distinct from the tasks in the dataset and were based on different AI2-THOR floorplans not included in the dataset. Consequently, all the tasks in the dataset are considered unseen during testing. We evaluate SMART-LLM with various language models as its backbone. We employ GPT-4 [5 ###reference_b5###], GPT-3.5 [6 ###reference_b6###], Llama-2-70B [7 ###reference_b7###], and Claude-3-Opus [41 ###reference_b41###] to assess SMART-LLM\u2019s performance across diverse tasks and with various language models. We also compare our method to two alternative baselines. In the first baseline, we use our task decomposition method and prompts and randomly assign sub-tasks to available robots. The second baseline uses our task decomposition method along with a rule-based method for task allocation implemented based on [40 ###reference_b40###]. Both baseline methods utilize GPT-4 to perform the task decomposition."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5.3",
|
| 73 |
+
"parent_section_id": "5",
|
| 74 |
+
"section_name": "Real-Robot Experiments",
|
| 75 |
+
"text": "In our real experiments with mobile robots, we assess the efficacy of SMART-LLM in handling tasks such as addressing visibility coverage challenges [42 ###reference_b42###] and capturing images of objects. These tasks encompass diverse regions of varying sizes that necessitate visibility coverage and objects requiring image capture. Both aerial and ground robots, each with unique skill sets and visibility capabilities, are at our disposal for task execution. SMART-LLM is utilized to generate task plans according to these specific requirements. The number of robots required for achieving complete visibility coverage is contingent upon the size of the region and the capabilities of the robots involved. We presume that our robots are endowed with essential low-level skills, including GoToLocation, ClickPicture, and Patrol, essential for proficient task execution. To formulate task plans within this framework, we rely on the same prompt samples employed in our simulation experiments, which are grounded in the AI2-THOR simulator."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "5.4",
|
| 79 |
+
"parent_section_id": "5",
|
| 80 |
+
"section_name": "Evaluation Metrics",
|
| 81 |
+
"text": "We employ five evaluation metrics: Success Rate (SR), Task Completion Rate (TCR), Goal Condition Recall (GCR), Robot Utilization (RU), and Executability (Exe), following the methodology of [29 ###reference_b29###]. Our evaluations are based on the dataset\u2019s final ground truth states, which we compare to the achieved states post-execution to assess task success.\nExe is the fraction of actions in the task plan that can be executed, regardless of their impact on task completion.\nRU evaluates the efficiency of the robot team by comparing the experiment\u2019s transition count to the dataset\u2019s ground truth transition count. RU equals when they match, when transitions equal sub-task count, and falls between and otherwise.\nGCR is quantified using the set difference between ground truth final state conditions and final state achieved, divided by the total number of task-specific goals in the dataset.\nTCR indicates task completion, irrespective of the robot utilization. If GCR = , then TCR = else .\nSR is success rate and is when both and are , else it is . The task is considered successful when completed with appropriate robot utilization."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "6",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "VI Results and Discussion",
|
| 87 |
+
"text": ""
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "6.1",
|
| 91 |
+
"parent_section_id": "6",
|
| 92 |
+
"section_name": "VI-A Simulation Experiments",
|
| 93 |
+
"text": "Table I ###reference_### summarizes the average results across each category in the dataset for our method with various LLM backbones and baseline methods on unseen dataset tasks. Overall, SMART-LMM consistently delivers favorable outcomes irrespective of the LLM backbone employed. In the elemental task, SMART-LLM adeptly decomposed the given task and assigned the robot accordingly, except when employing the GPT-3.5 backbone, which encountered challenges in decomposing certain tasks. However, when accurate task decompositions were provided, the baseline method with random allocation performed successfully, given that that all robots possessed all the necessary skills.\nIn the simple tasks, the outcomes hinged on the LLM\u2019s capacity to decompose the given task in the appropriate sequence for execution. Notably, SMART-LLM utilizing Claude-3 as the backbone achieved superior results, although other LLMs also demonstrated commendable performance. GPT-4 and Claude-3 attain a perfect TCR score of but have a lower SR of and due to sequential execution instead of parallel execution by two robots, hence impacting RU. Random task allocation often faltered, whereas rule-based allocation succeeded when task decompositions from LLM followed a logical sequence, yielding identical results to those achieved using LLM for allocation.\nIn compound and complex tasks, our method consistently achieves favorable results across all LLM backbones, with a success rate of . We observed occasional struggles with task sequencing and robot team assignment in SMART-LLM, which may be mitigated by including additional prompt samples. However, the token limitations of certain LLMs hinder this optimization. Particularly, GPT-3.5 demonstrates underperformance compared to other LLM models, likely due to its deficiency in logical reasoning capabilities. Interestingly, Lllam 2, with only 70B parameters compared to trillions in other models, performs equally well. This success can be attributed to the prompting structure of SMART-LMM, enabling efficient performance even with smaller and simpler models. Consequently, SMART-LLM is deployable on local machines as well. Our decomposition method, employing random allocation, generally falters for skill-based task assignments due to its inability to consider the environment\u2019s state and the robot\u2019s skills. Rule-based allocation demonstrates satisfactory performance for compound tasks requiring the identification of robots with the appropriate skills. However, it falters in compound tasks involving object properties and complex tasks where team formation relies on specialized constraints. While these shortcomings could be mitigated by incorporating additional constraints into the code, this approach would require continual modifications or additions to accommodate new scenarios. Such practices compromise the scalability and ease of adaptation of the method. This underscores the scalability of SMART-LLM, as it does not necessitate any modifications for newer tasks, rendering the method highly scalable. Videos showing all the experiments can be accessed via https://youtu.be/TnyCKwgTm3U ###reference_youtu.be/TnyCKwgTm3U###.\nInfeasible Scenarios. In addition to the results presented in Table I ###reference_###, we conducted assessments involving more intricate tasks for which none of the robots possessed the required skills. This particular scenario is not included in Table I ###reference_### because no feasible code can be generated for the metrics to be measured. 
Notably, our approach utilizing the GPT-4 and Claude-3 backbones exhibited the capacity to discern this situation and refrained from generating any task allocation plan. In contrast, our method employing GPT-3.5 and Llama2 produced a task allocation plan involving robots ill-suited for the designated tasks. This disparity underscores the enhanced logical reasoning capabilities of GPT-4 and Claude-3 in recognizing and responding to such scenarios.\nVariability in Performance. The inherent non-deterministic characteristics of LLM introduce a degree of variability in its outcomes [43 ###reference_b43###]. To assess this variability, we conducted 5 separate runs, each on a randomly selected task from every category within our dataset. Table II ###reference_### provides the mean and standard deviations of the results observed across these trials for our approach using GPT-4 as the backbone. For elemental, simple, and complex tasks, our method consistently yielded comparable results. Nevertheless, in the case of complex scenarios, we encountered inconsistency, leading to occasional failures in robot task allocation.\nAblation Study. \nWe utilized a benchmark dataset to evaluate different variations of our method, examining the impact of comments (both line-by-line and task summaries) in Python prompts. We validated our method with prompts lacking such comments. Additionally, we studied the influence of the coalition formation stage by removing it and directly allocating tasks based on task decomposition output. Table III ###reference_### summarizes the ablations of our method using GPT-4 as the backbone. Removing comments generally reduces the success rate, underlining the value of natural language instructions with code. Notably, when comments are removed, task decomposition and allocation perform similarly across simple and elemental tasks but suffer in compound and complex tasks, indicating that comments aid in understanding reasoning and logical structures. The removal of coalition formation led to a decrease in the success rate. This decline was primarily attributed to the absence of detailed rational reasoning for task allocation. Without coalition formation, elemental tasks deviated the most, and the success rate dropped from to , as all task allocation samples involved scenarios requiring robot teaming, leading to unnecessary multi-robot allocation for elemental tasks."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "6.2",
|
| 97 |
+
"parent_section_id": "6",
|
| 98 |
+
"section_name": "VI-B Real-Robot Experiments",
|
| 99 |
+
"text": "In real-robot experiments, we first tested our method for coverage visibility tasks with regions of different areas and robots with different visibility areas. When tested across various tasks, our method correctly generated task plans and allocated an appropriate number of robots. Despite this task being completely unseen and there were no sample prompts involving properties such as visibility, our method executed it seamlessly using real robots, bridging the gap between simulation and real-world applications. In Fig. 3 ###reference_###, for the instruction \u201cpatrol the regions\u201d, one or more robots are assigned to regions based on their visibility, and they patrol those regions. Furthermore, we evaluated our approach in tasks involving navigation and capturing images of predetermined objects. Despite these skills being entirely unseen, SMART-LLM successfully generated plans in the correct sequence and captured images of the specified objects.\n###figure_3###"
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "7",
|
| 103 |
+
"parent_section_id": null,
|
| 104 |
+
"section_name": "VII Conclusions and Future Work",
|
| 105 |
+
"text": "In our research, we delve into the potential of LLMs in the realm of generating task plans for heterogeneous robot teams. Our approach introduces prompting techniques, tailored to enhance the efficiency of the four key stages of multi-robot task planning. Each prompt takes into account the attributes of the environment and the capabilities of the individual robots, to generate a task plan.\nOur experiments validate that the proposed method can handle task instructions of varying complexities. Notably, our approach exhibits remarkable adaptability, allowing it to seamlessly generalize to new and unexplored environments, robot types, and task scenarios. This method streamlines the transition from simulations to real-world robot applications, enabling task plan samples from simulations to be used for generating task plans for real robot systems. In the future, we aim to enhance our work by implementing dynamic task allocation among robots and exploring multi-agent LLM frameworks for task planning."
|
| 106 |
+
}
|
| 107 |
+
],
|
| 108 |
+
"appendix": [],
|
| 109 |
+
"tables": {
|
| 110 |
+
"1": {
|
| 111 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:120%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T1.4.1.1\" style=\"font-size:75%;\">TABLE I</span>: </span><span class=\"ltx_text\" id=\"S5.T1.5.2\" style=\"font-size:75%;\">Evaluation of SMART-LLM and baselines in the AI2-THOR simulator for different categories of tasks in the benchmark dataset. </span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T1.6\" style=\"width:433.6pt;height:106.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-140.9pt,34.7pt) scale(0.606096269246407,0.606096269246407) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.6.1\">\n<tr class=\"ltx_tr\" id=\"S5.T1.6.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.1.1.1\" style=\"font-size:120%;\">Methods</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"5\" id=\"S5.T1.6.1.1.2\">\n<span class=\"ltx_text\" id=\"S5.T1.6.1.1.2.1\" style=\"font-size:120%;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.1.2.1.1\">Elemental</span></span><span class=\"ltx_text\" id=\"S5.T1.6.1.1.2.2\" style=\"font-size:120%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"5\" id=\"S5.T1.6.1.1.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.1.3.1\" style=\"font-size:120%;\">Simple</span><span class=\"ltx_text\" id=\"S5.T1.6.1.1.3.2\" style=\"font-size:120%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"5\" id=\"S5.T1.6.1.1.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.1.4.1\" style=\"font-size:120%;\">Compound</span><span class=\"ltx_text\" id=\"S5.T1.6.1.1.4.2\" style=\"font-size:120%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"5\" id=\"S5.T1.6.1.1.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.1.5.1\" style=\"font-size:120%;\">Complex</span><span class=\"ltx_text\" id=\"S5.T1.6.1.1.5.2\" style=\"font-size:120%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.1.1\" style=\"font-size:120%;\">SR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.2.1\" style=\"font-size:120%;\">TCR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.3.1\" style=\"font-size:120%;\">GCR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.4.1\" style=\"font-size:120%;\">RU</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.5.1\" style=\"font-size:120%;\">Exe</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.6.1\" style=\"font-size:120%;\">SR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.7.1\" style=\"font-size:120%;\">TCR</span></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.8.1\" style=\"font-size:120%;\">GCR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.9.1\" style=\"font-size:120%;\">RU</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.10.1\" style=\"font-size:120%;\">Exe</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.11.1\" style=\"font-size:120%;\">SR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.12.1\" style=\"font-size:120%;\">TCR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.13\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.13.1\" style=\"font-size:120%;\">GCR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.14\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.14.1\" style=\"font-size:120%;\">RU</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.15\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.15.1\" style=\"font-size:120%;\">Exe</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.16\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.16.1\" style=\"font-size:120%;\">SR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.17\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.17.1\" style=\"font-size:120%;\">TCR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.18\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.18.1\" style=\"font-size:120%;\">GCR</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.19\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.19.1\" style=\"font-size:120%;\">RU</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.2.20\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.2.20.1\" style=\"font-size:120%;\">Exe</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.6.1.3.1.1\">\n<tr class=\"ltx_tr\" id=\"S5.T1.6.1.3.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.3.1.1.1.1\"><span class=\"ltx_text\" id=\"S5.T1.6.1.3.1.1.1.1.1\" style=\"font-size:120%;\">SMART-LLM\u00a0(GPT-4)</span></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.2.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.3.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.4.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.5.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.6\"><span class=\"ltx_text 
ltx_font_bold\" id=\"S5.T1.6.1.3.6.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.7\"><span class=\"ltx_text\" id=\"S5.T1.6.1.3.7.1\" style=\"font-size:120%;\">0.62</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.8.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.9.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.10\"><span class=\"ltx_text\" id=\"S5.T1.6.1.3.10.1\" style=\"font-size:120%;\">0.62</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.11.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.12.1\" style=\"font-size:120%;\">0.69</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.13\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.13.1\" style=\"font-size:120%;\">0.76</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.14\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.14.1\" style=\"font-size:120%;\">0.85</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.15\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.15.1\" style=\"font-size:120%;\">0.92</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.16\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.16.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.17\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.17.1\" style=\"font-size:120%;\">0.71</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.18\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.18.1\" style=\"font-size:120%;\">0.85</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.19\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.19.1\" style=\"font-size:120%;\">0.92</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.20\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.20.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.6.1.3.21\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.3.21.1\" style=\"font-size:120%;\">0.97</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.6.1.4.1.1\">\n<tr class=\"ltx_tr\" id=\"S5.T1.6.1.4.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.1.1.1.1\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.1.1.1.1.1\" style=\"font-size:120%;\">SMART-LLM\u00a0(GPT-3.5)</span></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.2\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.2.1\" style=\"font-size:120%;\">0.83</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.3\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.3.1\" style=\"font-size:120%;\">0.83</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.4\"><span 
class=\"ltx_text\" id=\"S5.T1.6.1.4.4.1\" style=\"font-size:120%;\">0.83</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.4.5.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.6\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.6.1\" style=\"font-size:120%;\">0.91</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.7\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.7.1\" style=\"font-size:120%;\">0.62</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.8\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.8.1\" style=\"font-size:120%;\">0.87</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.9\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.9.1\" style=\"font-size:120%;\">0.93</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.10\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.10.1\" style=\"font-size:120%;\">0.62</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.11\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.11.1\" style=\"font-size:120%;\">0.95</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.12\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.12.1\" style=\"font-size:120%;\">0.42</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.13\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.13.1\" style=\"font-size:120%;\">0.50</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.14\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.14.1\" style=\"font-size:120%;\">0.61</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.15\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.15.1\" style=\"font-size:120%;\">0.71</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.16\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.16.1\" style=\"font-size:120%;\">0.85</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.17\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.17.1\" style=\"font-size:120%;\">0.14</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.18\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.18.1\" style=\"font-size:120%;\">0.28</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.19\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.19.1\" style=\"font-size:120%;\">0.35</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.20\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.20.1\" style=\"font-size:120%;\">0.85</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.4.21\"><span class=\"ltx_text\" id=\"S5.T1.6.1.4.21.1\" style=\"font-size:120%;\">0.62</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.6.1.5.1.1\">\n<tr class=\"ltx_tr\" id=\"S5.T1.6.1.5.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.1.1.1.1\"><span class=\"ltx_text\" id=\"S5.T1.6.1.5.1.1.1.1.1\" style=\"font-size:120%;\">SMART-LLM\u00a0(Llama2)</span></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.5.2.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.5.3.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.4\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S5.T1.6.1.5.4.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.5.5.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.5.6.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.7\"><span class=\"ltx_text\" id=\"S5.T1.6.1.5.7.1\" style=\"font-size:120%;\">0.75</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.8\"><span class=\"ltx_text\" id=\"S5.T1.6.1.5.8.1\" style=\"font-size:120%;\">0.87</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.9\"><span class=\"ltx_text\" id=\"S5.T1.6.1.5.9.1\" style=\"font-size:120%;\">0.93</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.10\"><span class=\"ltx_text\" id=\"S5.T1.6.1.5.10.1\" style=\"font-size:120%;\">0.87</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.5.11.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.12\"><span class=\"ltx_text\" id=\"S5.T1.6.1.5.12.1\" style=\"font-size:120%;\">0.64</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.13\"><span class=\"ltx_text\" id=\"S5.T1.6.1.5.13.1\" style=\"font-size:120%;\">0.69</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.14\"><span class=\"ltx_text\" id=\"S5.T1.6.1.5.14.1\" style=\"font-size:120%;\">0.80</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.15\"><span class=\"ltx_text\" id=\"S5.T1.6.1.5.15.1\" style=\"font-size:120%;\">0.87</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.16\"><span class=\"ltx_text\" id=\"S5.T1.6.1.5.16.1\" style=\"font-size:120%;\">0.90</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.17\"><span class=\"ltx_text\" id=\"S5.T1.6.1.5.17.1\" style=\"font-size:120%;\">0.63</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.18\"><span class=\"ltx_text\" id=\"S5.T1.6.1.5.18.1\" style=\"font-size:120%;\">0.71</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.19\"><span class=\"ltx_text\" id=\"S5.T1.6.1.5.19.1\" style=\"font-size:120%;\">0.83</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.20\"><span class=\"ltx_text\" id=\"S5.T1.6.1.5.20.1\" style=\"font-size:120%;\">0.90</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.5.21\"><span class=\"ltx_text\" id=\"S5.T1.6.1.5.21.1\" style=\"font-size:120%;\">0.89</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.1.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.6.1.6.1.1\">\n<tr class=\"ltx_tr\" id=\"S5.T1.6.1.6.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.1.1.1.1\"><span class=\"ltx_text\" id=\"S5.T1.6.1.6.1.1.1.1.1\" style=\"font-size:120%;\">SMART-LLM\u00a0(Claude3)</span></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.6.2.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.6.3.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.4\"><span class=\"ltx_text 
ltx_font_bold\" id=\"S5.T1.6.1.6.4.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.6.5.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.6.6.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.6.7.1\" style=\"font-size:120%;\">0.87</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.6.8.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.6.9.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.10\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.6.10.1\" style=\"font-size:120%;\">0.93</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.6.11.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.12\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.6.12.1\" style=\"font-size:120%;\">0.69</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.13\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.6.13.1\" style=\"font-size:120%;\">0.76</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.14\"><span class=\"ltx_text\" id=\"S5.T1.6.1.6.14.1\" style=\"font-size:120%;\">0.81</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.15\"><span class=\"ltx_text\" id=\"S5.T1.6.1.6.15.1\" style=\"font-size:120%;\">0.87</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.16\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.6.16.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.17\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.6.17.1\" style=\"font-size:120%;\">0.71</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.18\"><span class=\"ltx_text\" id=\"S5.T1.6.1.6.18.1\" style=\"font-size:120%;\">0.71</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.19\"><span class=\"ltx_text\" id=\"S5.T1.6.1.6.19.1\" style=\"font-size:120%;\">0.87</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.20\"><span class=\"ltx_text\" id=\"S5.T1.6.1.6.20.1\" style=\"font-size:120%;\">0.97</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.6.21\"><span class=\"ltx_text\" id=\"S5.T1.6.1.6.21.1\" style=\"font-size:120%;\">0.92</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.1.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.6.1.7.1.1\">\n<tr class=\"ltx_tr\" id=\"S5.T1.6.1.7.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.1.1.1.1\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.1.1.1.1.1\" style=\"font-size:120%;\">Decomp\u00a0(ours) + Rand</span></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.7.2.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.7.3.1\" 
style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.7.4.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.7.5.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.7.6.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.7\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.7.1\" style=\"font-size:120%;\">0.37</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.8\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.8.1\" style=\"font-size:120%;\">0.62</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.9\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.9.1\" style=\"font-size:120%;\">0.62</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.10\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.10.1\" style=\"font-size:120%;\">0.37</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.11\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.11.1\" style=\"font-size:120%;\">0.60</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.12\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.12.1\" style=\"font-size:120%;\">0.08</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.13\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.13.1\" style=\"font-size:120%;\">0.16</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.14\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.14.1\" style=\"font-size:120%;\">0.25</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.15\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.15.1\" style=\"font-size:120%;\">0.41</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.16\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.16.1\" style=\"font-size:120%;\">0.37</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.17\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.17.1\" style=\"font-size:120%;\">0.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.18\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.18.1\" style=\"font-size:120%;\">0.00</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.19\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.19.1\" style=\"font-size:120%;\">0.15</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.20\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.20.1\" style=\"font-size:120%;\">0.85</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.7.21\"><span class=\"ltx_text\" id=\"S5.T1.6.1.7.21.1\" style=\"font-size:120%;\">0.38</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.1.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.6.1.8.1.1\">\n<tr class=\"ltx_tr\" id=\"S5.T1.6.1.8.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.1.8.1.1.1.1\">\n<span class=\"ltx_text\" id=\"S5.T1.6.1.8.1.1.1.1.1\" style=\"font-size:120%;\">Decomp\u00a0(ours) + Rule</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S5.T1.6.1.8.1.1.1.1.2.1\" style=\"font-size:120%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.10062v2#bib.bib40\" title=\"\">40</a><span class=\"ltx_text\" id=\"S5.T1.6.1.8.1.1.1.1.3.2\" 
style=\"font-size:120%;\">]</span></cite>\n</td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.8.2.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.8.3.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.8.4.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.8.5.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.8.6.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.7\"><span class=\"ltx_text\" id=\"S5.T1.6.1.8.7.1\" style=\"font-size:120%;\">0.62</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.8.8.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.8.9.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.10\"><span class=\"ltx_text\" id=\"S5.T1.6.1.8.10.1\" style=\"font-size:120%;\">0.62</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.11\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.1.8.11.1\" style=\"font-size:120%;\">1.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.12\"><span class=\"ltx_text\" id=\"S5.T1.6.1.8.12.1\" style=\"font-size:120%;\">0.57</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.13\"><span class=\"ltx_text\" id=\"S5.T1.6.1.8.13.1\" style=\"font-size:120%;\">0.57</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.14\"><span class=\"ltx_text\" id=\"S5.T1.6.1.8.14.1\" style=\"font-size:120%;\">0.65</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.15\"><span class=\"ltx_text\" id=\"S5.T1.6.1.8.15.1\" style=\"font-size:120%;\">0.81</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.16\"><span class=\"ltx_text\" id=\"S5.T1.6.1.8.16.1\" style=\"font-size:120%;\">0.74</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.17\"><span class=\"ltx_text\" id=\"S5.T1.6.1.8.17.1\" style=\"font-size:120%;\">0.14</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.18\"><span class=\"ltx_text\" id=\"S5.T1.6.1.8.18.1\" style=\"font-size:120%;\">0.14</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.19\"><span class=\"ltx_text\" id=\"S5.T1.6.1.8.19.1\" style=\"font-size:120%;\">0.35</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.20\"><span class=\"ltx_text\" id=\"S5.T1.6.1.8.20.1\" style=\"font-size:120%;\">0.85</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.1.8.21\"><span class=\"ltx_text\" id=\"S5.T1.6.1.8.21.1\" style=\"font-size:120%;\">0.54</span></td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 112 |
+
"capture": "TABLE I: Evaluation of SMART-LLM and baselines in the AI2-THOR simulator for different categories of tasks in the benchmark dataset. "
|
| 113 |
+
},
|
| 114 |
+
"2": {
|
| 115 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T2\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:120%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S6.T2.24.1.1\" style=\"font-size:75%;\">TABLE II</span>: </span><span class=\"ltx_text\" id=\"S6.T2.25.2\" style=\"font-size:75%;\">Variability in performance.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S6.T2.20\" style=\"width:433.6pt;height:121.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(56.0pt,-15.7pt) scale(1.34803783328368,1.34803783328368) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S6.T2.20.20\">\n<tr class=\"ltx_tr\" id=\"S6.T2.20.20.21\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.20.20.21.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.20.20.21.1.1\" style=\"font-size:120%;\">Method</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T2.20.20.21.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.20.20.21.2.1\" style=\"font-size:120%;\">SR</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.20.20.21.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.20.20.21.3.1\" style=\"font-size:120%;\">TCR</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.20.20.21.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.20.20.21.4.1\" style=\"font-size:120%;\">GCR</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.20.20.21.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.20.20.21.5.1\" style=\"font-size:120%;\">RU</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.20.20.21.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T2.20.20.21.6.1\" style=\"font-size:120%;\">Exe</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.5.5.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.5.5.5.6\"><span class=\"ltx_text\" id=\"S6.T2.5.5.5.6.1\" style=\"font-size:120%;\">Elemental</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T2.1.1.1.1\">\n<span class=\"ltx_text\" id=\"S6.T2.1.1.1.1.1\" style=\"font-size:120%;\">1.00</span><span class=\"ltx_text\" id=\"S6.T2.1.1.1.1.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.2.2.2.2\">\n<span class=\"ltx_text\" id=\"S6.T2.2.2.2.2.1\" style=\"font-size:120%;\">1.00</span><span class=\"ltx_text\" id=\"S6.T2.2.2.2.2.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.3.3.3.3\">\n<span class=\"ltx_text\" id=\"S6.T2.3.3.3.3.1\" style=\"font-size:120%;\">1.00</span><span class=\"ltx_text\" id=\"S6.T2.3.3.3.3.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.4.4.4.4\">\n<span class=\"ltx_text\" id=\"S6.T2.4.4.4.4.1\" style=\"font-size:120%;\">1.00</span><span class=\"ltx_text\" id=\"S6.T2.4.4.4.4.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.5.5.5.5\">\n<span class=\"ltx_text\" id=\"S6.T2.5.5.5.5.1\" style=\"font-size:120%;\">1.00</span><span class=\"ltx_text\" id=\"S6.T2.5.5.5.5.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.10.10.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.10.10.10.6\"><span class=\"ltx_text\" id=\"S6.T2.10.10.10.6.1\" style=\"font-size:120%;\">Simple</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T2.6.6.6.1\">\n<span class=\"ltx_text\" id=\"S6.T2.6.6.6.1.1\" style=\"font-size:120%;\">1.00</span><span 
class=\"ltx_text\" id=\"S6.T2.6.6.6.1.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.7.7.7.2\">\n<span class=\"ltx_text\" id=\"S6.T2.7.7.7.2.1\" style=\"font-size:120%;\">1.00</span><span class=\"ltx_text\" id=\"S6.T2.7.7.7.2.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.8.8.8.3\">\n<span class=\"ltx_text\" id=\"S6.T2.8.8.8.3.1\" style=\"font-size:120%;\">1.00</span><span class=\"ltx_text\" id=\"S6.T2.8.8.8.3.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.9.9.9.4\">\n<span class=\"ltx_text\" id=\"S6.T2.9.9.9.4.1\" style=\"font-size:120%;\">1.00</span><span class=\"ltx_text\" id=\"S6.T2.9.9.9.4.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.10.10.10.5\">\n<span class=\"ltx_text\" id=\"S6.T2.10.10.10.5.1\" style=\"font-size:120%;\">1.00</span><span class=\"ltx_text\" id=\"S6.T2.10.10.10.5.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.15.15.15\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.15.15.15.6\"><span class=\"ltx_text\" id=\"S6.T2.15.15.15.6.1\" style=\"font-size:120%;\">Compound</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T2.11.11.11.1\">\n<span class=\"ltx_text\" id=\"S6.T2.11.11.11.1.1\" style=\"font-size:120%;\">1.00</span><span class=\"ltx_text\" id=\"S6.T2.11.11.11.1.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.12.12.12.2\">\n<span class=\"ltx_text\" id=\"S6.T2.12.12.12.2.1\" style=\"font-size:120%;\">1.00</span><span class=\"ltx_text\" id=\"S6.T2.12.12.12.2.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.13.13.13.3\">\n<span class=\"ltx_text\" id=\"S6.T2.13.13.13.3.1\" style=\"font-size:120%;\">1.00</span><span class=\"ltx_text\" id=\"S6.T2.13.13.13.3.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.14.14.14.4\">\n<span class=\"ltx_text\" id=\"S6.T2.14.14.14.4.1\" style=\"font-size:120%;\">1.00</span><span class=\"ltx_text\" id=\"S6.T2.14.14.14.4.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.15.15.15.5\">\n<span class=\"ltx_text\" id=\"S6.T2.15.15.15.5.1\" style=\"font-size:120%;\">1.00</span><span class=\"ltx_text\" id=\"S6.T2.15.15.15.5.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T2.20.20.20\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.20.20.20.6\"><span class=\"ltx_text\" id=\"S6.T2.20.20.20.6.1\" style=\"font-size:120%;\">Complex</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T2.16.16.16.1\">\n<span class=\"ltx_text\" id=\"S6.T2.16.16.16.1.1\" style=\"font-size:120%;\">0.48</span><span class=\"ltx_text\" id=\"S6.T2.16.16.16.1.2\" style=\"font-size:120%;\">0.40</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.17.17.17.2\">\n<span class=\"ltx_text\" id=\"S6.T2.17.17.17.2.1\" style=\"font-size:120%;\">0.48</span><span class=\"ltx_text\" id=\"S6.T2.17.17.17.2.2\" style=\"font-size:120%;\">0.40</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.18.18.18.3\">\n<span class=\"ltx_text\" id=\"S6.T2.18.18.18.3.1\" style=\"font-size:120%;\">0.73</span><span class=\"ltx_text\" id=\"S6.T2.18.18.18.3.2\" style=\"font-size:120%;\">0.22</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.19.19.19.4\">\n<span 
class=\"ltx_text\" id=\"S6.T2.19.19.19.4.1\" style=\"font-size:120%;\">1.00</span><span class=\"ltx_text\" id=\"S6.T2.19.19.19.4.2\" style=\"font-size:120%;\">0.00</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T2.20.20.20.5\">\n<span class=\"ltx_text\" id=\"S6.T2.20.20.20.5.1\" style=\"font-size:120%;\">0.81</span><span class=\"ltx_text\" id=\"S6.T2.20.20.20.5.2\" style=\"font-size:120%;\">0.15</span>\n</td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 116 |
+
"capture": "TABLE II: Variability in performance."
|
| 117 |
+
},
|
| 118 |
+
"3": {
|
| 119 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S6.T3\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:70%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S6.T3.4.1.1\" style=\"font-size:129%;\">TABLE III</span>: </span><span class=\"ltx_text\" id=\"S6.T3.5.2\" style=\"font-size:129%;\">Ablation studies.</span></figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S6.T3.6\" style=\"width:433.6pt;height:340.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(148.0pt,-116.0pt) scale(3.1486686389713,3.1486686389713) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S6.T3.6.1\">\n<tr class=\"ltx_tr\" id=\"S6.T3.6.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.6.1.1.1.1\" style=\"font-size:70%;\">Method</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T3.6.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.6.1.1.2.1\" style=\"font-size:70%;\">SR</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.6.1.1.3.1\" style=\"font-size:70%;\">TCR</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.6.1.1.4.1\" style=\"font-size:70%;\">GCR</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.6.1.1.5.1\" style=\"font-size:70%;\">RU</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S6.T3.6.1.1.6.1\" style=\"font-size:70%;\">Exe</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.6.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.2.1\"><span class=\"ltx_text\" id=\"S6.T3.6.1.2.1.1\" style=\"font-size:70%;\">Ours</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T3.6.1.2.2\"><span class=\"ltx_text\" id=\"S6.T3.6.1.2.2.1\" style=\"font-size:70%;\">0.75</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.2.3\"><span class=\"ltx_text\" id=\"S6.T3.6.1.2.3.1\" style=\"font-size:70%;\">0.90</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.2.4\"><span class=\"ltx_text\" id=\"S6.T3.6.1.2.4.1\" style=\"font-size:70%;\">0.94</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.2.5\"><span class=\"ltx_text\" id=\"S6.T3.6.1.2.5.1\" style=\"font-size:70%;\">0.88</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.2.6\"><span class=\"ltx_text\" id=\"S6.T3.6.1.2.6.1\" style=\"font-size:70%;\">0.99</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.6.1.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.3.1\"><span class=\"ltx_text\" id=\"S6.T3.6.1.3.1.1\" style=\"font-size:70%;\">No Comments</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T3.6.1.3.2\"><span class=\"ltx_text\" id=\"S6.T3.6.1.3.2.1\" style=\"font-size:70%;\">0.48</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.3.3\"><span class=\"ltx_text\" id=\"S6.T3.6.1.3.3.1\" style=\"font-size:70%;\">0.65</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.3.4\"><span class=\"ltx_text\" id=\"S6.T3.6.1.3.4.1\" style=\"font-size:70%;\">0.73</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.3.5\"><span class=\"ltx_text\" id=\"S6.T3.6.1.3.5.1\" style=\"font-size:70%;\">0.75</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.3.6\"><span 
class=\"ltx_text\" id=\"S6.T3.6.1.3.6.1\" style=\"font-size:70%;\">0.78</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.6.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.4.1\"><span class=\"ltx_text\" id=\"S6.T3.6.1.4.1.1\" style=\"font-size:70%;\">No Summary</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T3.6.1.4.2\"><span class=\"ltx_text\" id=\"S6.T3.6.1.4.2.1\" style=\"font-size:70%;\">0.61</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.4.3\"><span class=\"ltx_text\" id=\"S6.T3.6.1.4.3.1\" style=\"font-size:70%;\">0.74</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.4.4\"><span class=\"ltx_text\" id=\"S6.T3.6.1.4.4.1\" style=\"font-size:70%;\">0.80</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.4.5\"><span class=\"ltx_text\" id=\"S6.T3.6.1.4.5.1\" style=\"font-size:70%;\">0.78</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.4.6\"><span class=\"ltx_text\" id=\"S6.T3.6.1.4.6.1\" style=\"font-size:70%;\">0.81</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.6.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.5.1\"><span class=\"ltx_text\" id=\"S6.T3.6.1.5.1.1\" style=\"font-size:70%;\">No Comm. & Summ.</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T3.6.1.5.2\"><span class=\"ltx_text\" id=\"S6.T3.6.1.5.2.1\" style=\"font-size:70%;\">0.41</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.5.3\"><span class=\"ltx_text\" id=\"S6.T3.6.1.5.3.1\" style=\"font-size:70%;\">0.61</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.5.4\"><span class=\"ltx_text\" id=\"S6.T3.6.1.5.4.1\" style=\"font-size:70%;\">0.66</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.5.5\"><span class=\"ltx_text\" id=\"S6.T3.6.1.5.5.1\" style=\"font-size:70%;\">0.59</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.5.6\"><span class=\"ltx_text\" id=\"S6.T3.6.1.5.6.1\" style=\"font-size:70%;\">0.69</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T3.6.1.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.6.1\"><span class=\"ltx_text\" id=\"S6.T3.6.1.6.1.1\" style=\"font-size:70%;\">No Coalition</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S6.T3.6.1.6.2\"><span class=\"ltx_text\" id=\"S6.T3.6.1.6.2.1\" style=\"font-size:70%;\">0.60</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.6.3\"><span class=\"ltx_text\" id=\"S6.T3.6.1.6.3.1\" style=\"font-size:70%;\">0.68</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.6.4\"><span class=\"ltx_text\" id=\"S6.T3.6.1.6.4.1\" style=\"font-size:70%;\">0.75</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.6.5\"><span class=\"ltx_text\" id=\"S6.T3.6.1.6.5.1\" style=\"font-size:70%;\">0.85</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S6.T3.6.1.6.6\"><span class=\"ltx_text\" id=\"S6.T3.6.1.6.6.1\" style=\"font-size:70%;\">0.82</span></td>\n</tr>\n</table>\n</span></div>\n</figure>",
|
| 120 |
+
"capture": "TABLE III: Ablation studies."
|
| 121 |
+
}
|
| 122 |
+
},
|
| 123 |
+
"image_paths": {
|
| 124 |
+
"1": {
|
| 125 |
+
"figure_path": "2309.10062v2_figure_1.png",
|
| 126 |
+
"caption": "Figure 1: An overview of SMART-LLM: Smart Multi-Agent Robot Task planning using Large Language Models (LLM). Given a high-level instruction, SMART-LLM decomposes the instruction into sub-tasks assigning them to individual robots based on their specific skills and capabilities, and orchestrating their execution in a coherent and logical sequence.",
|
| 127 |
+
"url": "http://arxiv.org/html/2309.10062v2/x1.png"
|
| 128 |
+
},
|
| 129 |
+
"2": {
|
| 130 |
+
"figure_path": "2309.10062v2_figure_2.png",
|
| 131 |
+
"caption": "Figure 2: System overview: SMART-LLM consists of four key stages: i) Task Decomposition: a prompt consisting of robot skills, objects, and task decomposition samples is combined with the input instruction. This is then fed to the LLM model to decompose the input task; ii) Coalition Formation: a prompt consisting of a list of robots, objects available in the environment, sample decomposed task examples along with corresponding coalition policy describing the formation of robot teams for those tasks, and decomposed task plan for the input task from the previous stage, is given to the LLM, to generate a coalition policy for the input task; iii) Task Allocation: a prompt consisting of sample decomposed tasks, their coalition policy and allocated task plans based on the coalition policy is given to the LLM, along with coalition policy generated for the input task. The LLM then outputs an allocated task plan based on this information; and iv) Task Execution: based on the allocated code generated, the robot executes the tasks. \u201c\u2026\u201d is used for brevity.",
|
| 132 |
+
"url": "http://arxiv.org/html/2309.10062v2/x2.png"
|
| 133 |
+
},
|
| 134 |
+
"3": {
|
| 135 |
+
"figure_path": "2309.10062v2_figure_3.png",
|
| 136 |
+
"caption": "Figure 3: Real-robot experiment: a) team of robots and the regions to be patrolled; b) robots after task planning and patrolling their respective regions allocated based on visibility area.",
|
| 137 |
+
"url": "http://arxiv.org/html/2309.10062v2/x3.png"
|
| 138 |
+
}
|
| 139 |
+
},
|
| 140 |
+
"validation": true,
|
| 141 |
+
"references": [],
|
| 142 |
+
"url": "http://arxiv.org/html/2309.10062v2"
|
| 143 |
+
}
|
20240323/2309.14552v2.json
ADDED
|
@@ -0,0 +1,146 @@
| 1 |
+
{
|
| 2 |
+
"title": "Tactile Estimation of Extrinsic Contact Patch for Stable Placement",
|
| 3 |
+
"abstract": "Precise perception of contact interactions is essential for fine-grained manipulation skills for robots.\nIn this paper, we present the design of feedback skills for robots that must learn to stack complex-shaped objects on top of each other (see Fig. Tactile Estimation of Extrinsic Contact Patch for Stable Placement).\nTo design such a system, a robot should be able to reason about the stability of placement from very gentle contact interactions. Our results demonstrate that it is possible to infer the stability of object placement based on tactile readings during contact formation between the object and its environment. In particular, we estimate the contact patch between a grasped object and its environment using force and tactile observations to estimate the stability of the object during a contact formation. The contact patch could be used to estimate the stability of the object upon release of the grasp. The proposed method is demonstrated in various pairs of objects that are used in a very popular board game.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Humans can perform very complex and precise manipulation tasks effortlessly.\nConsider, for example, gently stacking two lightweight objects on top of each other without looking at them, as shown in Fig. Tactile Estimation of Extrinsic Contact Patch for Stable Placement.\nSuccessful execution of this task requires the object not to fall upon release of the grasp.\nIn these scenarios, stability is not directly observable; it must be implicitly inferred from tactile signals that entangle both intrinsic (direct) contact between the end effector and the grasped object and extrinsic (indirect) contact between the grasped object and the environment.\nFor example, in Fig. Tactile Estimation of Extrinsic Contact Patch for Stable Placement, it is difficult to distinguish the stability of the configuration on the left from the right by looking at it visually.\nThis work is motivated by how humans can disentangle a composite tactile signal to determine the nature of extrinsic contact; and can further predict whether a given stack configuration is stable.\nWe present a closed-loop system that similarly reasons about object stability using tactile signals that arise out of extrinsic contacts.\nThe stability of the object could be estimated from the contact forces experienced by an object during placement. The stability of an object is governed by the relative location of the environmental contact and the center of mass location of the object. The forces observed by the force-torque (F/T) sensor mounted on the wrist of the robot, as well as the deformation observed by the tactile sensors co-located at the gripper fingers, depend on the contact patch between the object and its environment, as well as the geometric and physical properties of the object. As a simplification, we assume that the geometry of the objects is fixed, so the robot works with known pieces. Under this assumption, the problem of estimating the stability of placement from tactile observations is simplified. With this understanding, we try to estimate the contact patch between the object and the environment using tactile signals. However, estimating contact patches from a single tactile observation is a partially observable problem. Thus, a perfect estimate of the contact from a single interaction is impossible.\nTo solve the partial observability problem, we present a method for aggregating information from multiple observations. The proposed method collects tactile observations by interacting with the environment multiple times and updates its belief in the underlying contact formation. We show that we can monotonically improve our estimate of the contact formation between the environment and the grasped object. This estimate is used to move the object towards a stable configuration so that it can be released in a stable pose. This is demonstrated using several pairs of objects from a popular board game where the objective is to incorporate a new block on an existing tower without destabilizing it. 
We also perform ablations to understand which sensing modality, the F/T sensor or the vision-based tactile sensor, is more helpful for understanding the considered contact phenomena.\nContributions: In summary, our contributions are the following.\nWe present a method for estimating extrinsic contact patches from end-effector tactile signals that entangle both intrinsic and extrinsic contacts.\nOur probabilistic filtering approach, used in a feedback control loop, can stably stack a set of extremely challenging real-world objects using solely tactile sensing.\n###figure_1###"
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Related Work",
|
| 15 |
+
"text": "Block stacking.\nBlock stacking is one of the most widely studied problems in robotics. Several studies have addressed the problem of robot stacking through various approaches. These include learning to schedule auxiliary tasks for reinforcement learning (RL) [1 ###reference_b1###], combining demonstrations and RL [2 ###reference_b2###, 3 ###reference_b3###], employing sim-to-real transfer [2 ###reference_b2###, 4 ###reference_b4###, 5 ###reference_b5###], and using task-and-motion planning [6 ###reference_b6###]. The focus of these works primarily revolves around stacking simple cubes.\nLee et al. [7 ###reference_b7###] propose a benchmark that introduces relatively irregular rectangles generated by deforming cubes. However, these objects still maintain convexity and simplicity. Furrer et al. [8 ###reference_b8###] and Liu et al. [9 ###reference_b9###] have explored the stacking of irregular stones. Another related work that discusses vision-based contact support could be found in [10 ###reference_b10###], however, this assumed access to the geometry of the object and was indeed reasoning about the relative placement between blocks given the object geometries. Nevertheless, these studies make assumptions regarding knowledge of geometry and assume that objects possess wide support and high friction, simplifying the problem and enabling basic pick-and-place strategies. Most importantly, these works do not reason about stability using contact information but rather perform placement using open-loop controllers. These pick-and-place stackings would not work if there is ambiguity in the location of the environment (for example, the scenario shown in Fig. Tactile Estimation of Extrinsic Contact Patch for Stable Placement).\nTo address this problem, our proposed method considers the local contact phenomenon in which the object can topple and fall if it is not placed with the proper support. Moreover, we remove assumptions regarding the geometry of the underlying objects, necessitating the estimation of stability through interactions.\nExternal contact localization\nPrior works represent contacts as a set of points [11 ###reference_b11###, 12 ###reference_b12###] and lines [13 ###reference_b13###, 14 ###reference_b14###]. Although line contacts give us more information compared to point contacts, they require active exploration involving changes in gripper orientation [13 ###reference_b13###, 14 ###reference_b14###], making it difficult to apply them in our setting where the tower is very unstable.\nThe closest work to ours is the neural contact fields (NCF) of Higuera et al. [15 ###reference_b15###], where the authors estimate the contact patch between a grasped object and its environment. While NCF is evaluated on a simulation and a limited number of objects, we tested our method on unknown geometries of the environment, which can be used for an appropriate downstream task in a real system."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Problem Statement",
|
| 21 |
+
"text": "We are interested in performing a stable placement in environments where the object might have partial support for placement. Consider, for example, the scenario shown in Fig. Tactile Estimation of Extrinsic Contact Patch for Stable Placement, where it is not enough to establish contact with the bottom piece but rather to estimate the object\u2019s stability in the resulting contact formation. Thus, we consider the problem of estimating the stability of an object when in contact with its environment in an attempt to release and place the object in a stable pose during a task. This is a partially observable task, as we cannot observe the full state of the system, and thus, stability needs to be estimated from sensor observations. We assume that the robot has access to tactile sensors co-located at the gripper fingers and a Force/Torque (F/T) sensor at the wrist. A certain contact formation is stable if the object can remain stable after being released from the grasp.\nThe stability of a contact formation depends on the relative position of the center of mass of the object and the contact patch between the object and the environment.\nHowever, this cannot be directly observed during a contact formation, and thus leads to partial observability.\nA robot can usually observe force-torque signals and/or tactile images during interaction. The observed signals depend not only on the contact formation but also on the geometry and physical parameters of the grasped object. Thus,\nalthough these data have a lot of information, these are all entangled, and thus it is very difficult to extract specific information, e.g., estimate contact patch. The stability estimation problem in its full scope requires reasoning about the sensor observations while considering the geometric information of the objects.\nTo simplify the estimation problem, we make the following assumptions to limit the scope of the current study.\nGeometry and physical parameters of the grasped objects are fixed.\nAll objects are rigid and have flat surfaces.\nIt is important to emphasize that the robot is unfamiliar with the shape of the underlying objects and needs to explore a stable configuration through several probing attempts. These assumptions restrict the use of our proposed objects to known objects. A full and in-depth study of the problem is left as a future exercise."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "IV Method",
|
| 27 |
+
"text": "This work addresses the primary challenge of estimating stability during placing irregular objects. Since the contact formation between a grasped object and the environment generates sensor observations, we estimate the contact patch between them from force and tactile measurements.\nWe propose a framework consisting of four key components.\nFirst, the robot estimates the contact patch between the grasped object and its environment from an observation obtained by interacting with the environment.\nThen, it assesses stability based on the estimated contact patch; and releases the grasped object if it believes the current configuration is stable; otherwise, it aggregates information from multiple estimated contact patches to predict a belief map, which gives us a sense of the contact surface of the environment. Finally, the robot selects an action that moves the grasped object to a position that it believes can improve stability. In this section, we describe these four modules in more detail."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "4.1",
|
| 31 |
+
"parent_section_id": "4",
|
| 32 |
+
"section_name": "IV-A Contact Patch Estimation",
|
| 33 |
+
"text": "Given the observed tactile image and F/T measurements , our objective is to learn a model that generates a probabilistic contact patch , which consists of a set of probabilities indicating which part of the grasped object is in contact.\nContact representation.\nTo estimate the contact patch, we discretize the contact surface of the grasped object into points as each of which corresponds to a specific location on the contact surface of the grasped object (see Fig. 3 ###reference_### right). For each point , we predict the probability of being in contact or remaining uncontacted . Consequently, we represent the probabilistic contact patch as a set of probabilities .\nData collection by interaction.\nDuring a duration of seconds, the robot applies a downward force along the negative Z axis for mm, while collecting from tactile and force-torque sensors at a frequency of Hz. Specifically, , where with , where we use two tactile sensors mounted on each finger and measure marker displacements on the axis in the tactile image, and is the number of markers in column and row (see Fig. 2 ###reference_###), which can be obtained by post-processing the tactile image . Similarly, , is the F/T measurement. We use a suitable impedance control to prevent the object from falling by using excessive force.\nIn the data collection process, we add displacements in the plane such as and whose origin is the center position of the contact surface of the lower object (see Fig. 3 ###reference_###), and the minimum and maximum ranges are defined to ensure contact between the flat surfaces of the upper and lower objects. We use known geometries and displacements to generate ground-truth contact patches for training a model.\nTraining.\nFinally, we train a contact patch estimation model that takes observation and learns to generate a probabilistic contact surface as:\nThis model is trained by minimizing the binary cross-entropy loss for each data point . We use LSTM [16 ###reference_b16###] with two layers, each having units, to build the model to capture patterns in time-series data.\n###figure_2###"
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "4.2",
|
| 37 |
+
"parent_section_id": "4",
|
| 38 |
+
"section_name": "IV-B Stability Estimation",
|
| 39 |
+
"text": "We utilize the estimated contact patch to estimate the stability of the current configuration. To do that, we first construct a convex hull (see Fig. 2 ###reference_### (b)) using points whose associated probability exceeds a predefined threshold denoted by , which we use for our experiments. Subsequently, we check that the convex hull includes the position of the center of mass of the grasped object. In the affirmative case, the gripper releases the grasped object. Otherwise, the gripper aggregates information and moves towards a stable position by an action selection strategy described in the following sections."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4.3",
|
| 43 |
+
"parent_section_id": "4",
|
| 44 |
+
"section_name": "IV-C Aggregating Information from Multiple Interactions",
|
| 45 |
+
"text": "Since the estimation of the contact patch from tactile signals is a partially observable task, that is, multiple different contact patches can yield similar tactile signals, it is difficult to reliably estimate the contact patch from a single interaction. Therefore, we aggregate information from multiple interactions to disambiguate the estimate.\nWe denote the aggregated contact patch at the time step as , again representing a probabilistic contact surface of the bottom object , where is the number of discrete points. Following Ota et al. [17 ###reference_b17###], the probabilistic formulation of the contact (note we remove lowercase for simplification) given past observation and action can be formulated as\nwhere the first term is as we assume deterministic dynamics, and the second term is initialized with the prior distribution and can be obtained through recursion.\nThe posterior can be computed as:\nwhere the first term is given by the contact patch estimation model and the second term can be computed from Eq.(2 ###reference_###). Specifically, we initialize the probability with since we do not know whether the specific point is in contact or not before interaction."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.4",
|
| 49 |
+
"parent_section_id": "4",
|
| 50 |
+
"section_name": "IV-D Action Selection",
|
| 51 |
+
"text": "To realize a stable configuration, we design a policy that maximizes the contact surface area in the next step. The policy begins by calculating the central position of the convex hull of the aggregated contact patch , where is again the convex hull of the aggregated contact map, and subsequently directs the robot to navigate in the direction to this central position from the current position. Furthermore, to mitigate large movement at each step, we restrict movement within mm if the norm exceeds . We specifically set mm."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "5",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Experiments",
|
| 57 |
+
"text": ""
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "5.1",
|
| 61 |
+
"parent_section_id": "5",
|
| 62 |
+
"section_name": "Settings",
|
| 63 |
+
"text": "Tactile sensor.\nWe use a commercially available GelSight Mini tactile sensor [18 ###reference_b18###], which provides 320\u00d7240 compressed RGB images at a rate of approximately 25 Hz, with a field of view of 18.6 \u00d7 14.3 millimeters. We use gels that have 63 tracking markers.\nRobot platform.\nThe MELFA RV-5AS-D Assista robot, a collaborative robot with 6 DoF, is used in this study. The tactile sensor is mounted on the WSG-32 gripper (see Fig. 2 ###reference_###). We use a Force-Torque (F/T) sensor which is mounted on the wrist of the robot and used two-fold. First, we collect force observations that are used as input to the contact patch estimation model . Second, the stiffness control of the position-controlled robot.\nBandu.\nWe use pieces from Bandu for our experiment. Bandu is a toy game that involves stacking objects onto a base plate. Players take turns stacking these objects and compete to see who can stack the most. Each piece has a highly irregular shape, which requires robots to estimate stable placements based on the shape of the objects. Figure 4 ###reference_### illustrates the Bandu pieces used in our experiments. The challenge in the game is to accommodate an irregular piece in an existing tower without destabilizing it.\n###figure_3### ###figure_4###"
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "5.2",
|
| 67 |
+
"parent_section_id": "5",
|
| 68 |
+
"section_name": "Data Collection",
|
| 69 |
+
"text": "Settings.\nWe first show the distribution of the observed tactile signals to understand the difficulties of the task. We collect tactile signals for each pair of top-bottom objects to train the contact patch estimation model by interacting with three objects on the 3D printed board as shown in Fig. 4 ###reference_### (a), resulting in training samples.\nDuring data collection, we add random displacements on the axis as defined in Fig. 3 ###reference_###, and let the robot go down for mm after establishing contact with the bottom object for seconds using the stiffness controller whose gain parameter is [N/mm]. We use the grasping force of [N].\nResults and Analysis.\nFigure 5 ###reference_### shows the data distribution (left) and example contact patches (right). From the first to the fourth columns, we can observe the inherent difficulties of the estimation task. In many cases, we do not observe any symmetric distribution of , and the moment measurements about or . This could possibly be attributed to the inaccuracy in the 3D printing of the board or the slip of the object in the grasp during the contact interaction.\nFig. 5 ###reference_### (b) shows three contact patches sampled from the star positions in each row. While tactile signals near the star positions are very similar, the resulting contact patches are very different. This highlights the partial observability of the underlying contact formation, indicating that a single tactile observation may not be sufficient to localize the contact formation. This ambiguity makes training of a machine learning model very difficult because similar inputs (i.e., tactile observations) can lead to totally different outputs (i.e., contact patches)."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5.3",
|
| 73 |
+
"parent_section_id": "5",
|
| 74 |
+
"section_name": "Contact Patch Estimation",
|
| 75 |
+
"text": "Settings.\nNext, we compare the performance of the contact patch estimation on different input modalities. We train the model for each top object using the dataset collected in Sec. V-B ###reference_###, and we evaluate the model using the intersection-over-union (IoU) and binary classification metric. We compare the performance with three different input modalities, a F/T sensor, tactile sensors, and the combination of the two denoted as FT, Tac, and FT+Tac, respectively.\nThe evaluation is carried out using unseen two Bandu pieces (see Fig. 4 ###reference_###), which we denote as Short and Long. We used a 3D-printed jig to ensure that the robot always grasps the same position of the top object and collected interactions with random displacements.\nResults and Analysis.\nThe results are presented in Table I ###reference_###. When comparing the three modalities, we can clearly see that the combination of tactile sensors and the F/T sensor (FT+Tac) yields the best performance. Consequently, for our subsequent experiments, we will utilize both of these modalities.\nHowever, it should be noted that the model is not confident enough to estimate the contact patch. This is because the same tactile signals can lead to different contact patches, as discussed in Sec. V-B ###reference_###. Therefore, in the next experiment, we will aggregate information from multiple interactions and compare performance in stability estimation."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "5.4",
|
| 79 |
+
"parent_section_id": "5",
|
| 80 |
+
"section_name": "Stability Estimation",
|
| 81 |
+
"text": "Settings.\nNext, we assess the stability estimation performance of the proposed method. We reuse the same data as used in the previous experiments with an additional binary label indicating whether the current configuration is stable by checking whether the geometric center of the bottom surface of the grasped object (i.e., the projection of the center of mass of the grasped object on the bottom surface) lies inside the contact patch.\nWe compare our method with a baseline model that directly produces the stability probability by replacing the final layer of with a fully connected layer with a single unit and sigmoid activation. We name it Implicit because it implicitly estimates stability, while our framework explicitly predicts it through the estimated contact patch.\nResults and Analysis.\nTable II ###reference_### shows the qualitative results. Single interaction leads to poor performance, as seen in the results of the baseline (Implicit) as well as our method with single interaction (Ours ).\nHowever, by aggregating the estimates of multiple interactions, the stability estimation performance improves significantly, leading to an average accuracy of %.\nFigure 6 ###reference_### shows how the probability of a contact patch changes during interactions. It shows that the method corrects the initial inaccurate estimate and improves accuracy with additional interactions, and the method finally reconstructs the contact surface of the bottom object with reasonable accuracy.\n###figure_5###"
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5.5",
|
| 85 |
+
"parent_section_id": "5",
|
| 86 |
+
"section_name": "Stacking",
|
| 87 |
+
"text": "Settings.\nFinally, we evaluate the stacking performance of the method. We always initialize the first interaction from an unstable contact state (i.e., the object would topple upon release of grasp). We run the method times for each piece and evaluate whether the robot successfully places the piece in a stable configuration. Furthermore, we also test the method in a harder scenario, where the Long piece is already stacked onto the Short piece (see Fig. 4 ###reference_### for the definition of the pieces), and we stack a top piece on top of these two objects.\nWe compare our method with a Pick & Place baseline, where it releases the piece without estimating the stability.\nResults and Analysis.\nTable III ###reference_### shows the results. The pick-and-place baseline fails in all trials. The proposed method improves performance by predicting the contact patch at each iteration and aggregating information to improve the estimation accuracy. Although the success rate drops when the number of bottom objects is increased, the method can still succeed with a success rate of around %. Figure 7 ###reference_### shows a qualitative result of how it moves to the more stable position.\n###figure_6###"
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "6",
|
| 91 |
+
"parent_section_id": null,
|
| 92 |
+
"section_name": "VI Conclusion",
|
| 93 |
+
"text": "Designing systems that can interpret and disentangle useful contact information from observed tactile measurements is the key to precise and fine manipulation.\nWe proposed a framework for estimating extrinsic contact patches from tactile and force-torque measurements. Contact patch estimation allows us to estimate the stability of the placement of several different objects in novel and unstable environments. We tested the proposed approach for the placement of several pieces of the game of Bandu, which is known to be a difficult stacking task. In the future, we would like to improve the performance by training on a wider variety of objects and relaxing the assumption of the known geometry so that the trained model can be used for the stacking task with arbitrary objects."
|
| 94 |
+
}
|
| 95 |
+
],
|
| 96 |
+
"appendix": [],
|
| 97 |
+
"tables": {
|
| 98 |
+
"1": {
|
| 99 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Comparison of the contact patch estimation performance on different input modalities measured by IoU and binary classification accuracy. Bold numbers show the best results among the three different input modalities. The <span class=\"ltx_text ltx_font_italic\" id=\"S5.T1.41.1\">S</span> and <span class=\"ltx_text ltx_font_italic\" id=\"S5.T1.42.2\">L</span> of the bottom objects correspond to the <span class=\"ltx_text ltx_font_italic\" id=\"S5.T1.43.3\">Short</span> and <span class=\"ltx_text ltx_font_italic\" id=\"S5.T1.44.4\">Long</span> objects, respectively (see Fig.\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2309.14552v2#S5.F4\" title=\"Figure 4 \u2023 V-A Settings \u2023 V Experiments \u2023 Tactile Estimation of Extrinsic Contact Patch for Stable Placement\"><span class=\"ltx_text ltx_ref_tag\">4</span></a>).</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T1.36\" style=\"width:159.4pt;height:129.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-8.9pt,7.2pt) scale(0.9,0.9) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.36.36\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.36.36.37.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S5.T1.36.36.37.1.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T1.36.36.37.1.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T1.36.36.37.1.3\">Mushroom</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T1.36.36.37.1.4\">Barrel</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T1.36.36.37.1.5\">Pot</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.36.36.38.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.T1.36.36.38.2.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row\" id=\"S5.T1.36.36.38.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.36.36.38.2.3\">S</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.36.36.38.2.4\">L</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.36.36.38.2.5\">S</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.36.36.38.2.6\">L</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.36.36.38.2.7\">S</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T1.36.36.38.2.8\">L</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6.6\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S5.T1.6.6.6.7\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T1.6.6.6.7.1\">IoU</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S5.T1.6.6.6.8\">FT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.3.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.4.4.4.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.5.5.5.5\"></td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_t\" id=\"S5.T1.6.6.6.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.12.12.12\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T1.12.12.12.7\">Tac</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.7.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.8.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.9.9.9.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.10.10.10.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.11.11.11.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.12.12.12.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.18.18.18\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T1.18.18.18.7\">FT+Tac</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.13.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.14.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.15.15.15.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.16.16.16.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.17.17.17.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.18.18.18.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.24.24.24\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S5.T1.24.24.24.7\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T1.24.24.24.7.1\">Acc</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S5.T1.24.24.24.8\">FT</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.19.19.19.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.20.20.20.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.21.21.21.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.22.22.22.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.23.23.23.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.24.24.24.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.30.30.30\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T1.30.30.30.7\">Tac</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.25.25.25.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.26.26.26.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.27.27.27.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.28.28.28.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.29.29.29.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.30.30.30.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.36.36.36\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T1.36.36.36.7\">FT+Tac</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.31.31.31.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.32.32.32.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.33.33.33.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.34.34.34.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.35.35.35.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.36.36.36.6\"></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 100 |
+
"capture": "TABLE I: Comparison of the contact patch estimation performance on different input modalities measured by IoU and binary classification accuracy. Bold numbers show the best results among the three different input modalities. The S and L of the bottom objects correspond to the Short and Long objects, respectively (see Fig.\u00a04)."
|
| 101 |
+
},
|
| 102 |
+
"2": {
|
| 103 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Stability estimation performance measured by binary accuracy. indicates the number of interactions and bold numbers show the best results.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T2.29\" style=\"width:163.8pt;height:104.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-2.5pt,1.6pt) scale(0.97,0.97) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.29.27\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.29.27.28.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S5.T2.29.27.28.1.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T2.29.27.28.1.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T2.29.27.28.1.3\">Mushroom</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T2.29.27.28.1.4\">Barrel</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T2.29.27.28.1.5\">Pot</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.29.27.29.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.T2.29.27.29.2.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row\" id=\"S5.T2.29.27.29.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.29.27.29.2.3\">S</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.29.27.29.2.4\">L</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.29.27.29.2.5\">S</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.29.27.29.2.6\">L</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.29.27.29.2.7\">S</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T2.29.27.29.2.8\">L</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.8.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" colspan=\"2\" id=\"S5.T2.8.6.6.7\">Implicit</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.3.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.4.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.5.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.6.4.4.4\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.7.5.5.5\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.8.6.6.6\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.15.13.13\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S5.T2.15.13.13.8\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T2.15.13.13.8.1\">Ours</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.9.7.7.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.10.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.11.9.9.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.10.10.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.13.11.11.5\"></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.12.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.15.13.13.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.22.20.20\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T2.16.14.14.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.17.15.15.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.18.16.16.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.19.17.17.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.20.18.18.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.21.19.19.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.22.20.20.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.29.27.27\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T2.23.21.21.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.24.22.22.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.25.23.23.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.26.24.24.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.27.25.25.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.28.26.26.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.29.27.27.7\"></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 104 |
+
"capture": "TABLE II: Stability estimation performance measured by binary accuracy. indicates the number of interactions and bold numbers show the best results."
|
| 105 |
+
},
|
| 106 |
+
"3": {
|
| 107 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Success rate of stacking. One and Two means stacking on top of a single and two objects, respectively.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T3.18\" style=\"width:218.0pt;height:85.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-5.7pt,2.3pt) scale(0.95,0.95) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.18.18\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.18.18.19.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S5.T3.18.18.19.1.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T3.18.18.19.1.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T3.18.18.19.1.3\">Mushroom</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T3.18.18.19.1.4\">Barrel</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S5.T3.18.18.19.1.5\">Pot</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.18.18.20.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.T3.18.18.20.2.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row\" id=\"S5.T3.18.18.20.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.18.18.20.2.3\">S</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.18.18.20.2.4\">L</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.18.18.20.2.5\">S</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.18.18.20.2.6\">L</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.18.18.20.2.7\">S</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S5.T3.18.18.20.2.8\">L</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.6.6.6\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.6.6.6.7\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T3.6.6.6.7.1\">One</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.6.6.6.8\">Pick & Place</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.3.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.4.4.4.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.5.5.5.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.6.6.6.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.12.12.12\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T3.12.12.12.7\">Ours</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.7.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.8.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.9.9.9.3\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.10.10.10.4\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.11.11.11.5\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.12.12.12.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.18.18.18\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_bb ltx_border_t\" 
id=\"S5.T3.18.18.18.7\">Two</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S5.T3.18.18.18.8\">Ours</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T3.13.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T3.14.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T3.15.15.15.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T3.16.16.16.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T3.17.17.17.5\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T3.18.18.18.6\"></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
|
| 108 |
+
"capture": "TABLE III: Success rate of stacking. One and Two means stacking on top of a single and two objects, respectively."
|
| 109 |
+
}
|
| 110 |
+
},
|
| 111 |
+
"image_paths": {
|
| 112 |
+
"1": {
|
| 113 |
+
"figure_path": "2309.14552v2_figure_1.png",
|
| 114 |
+
"caption": "Figure 2: Pipeline: Our method comprises four components. First, a robot probes the environment to establish contact between the grasped object and the target object upon which it must be stacked.\nDuring this probing phase, we acquire a sequence of force/torque measurements and tactile images.\nWe then estimate the extrinsic contact patch and, in turn, the potential stability of the resultant configuration.\nSubsequently, we aggregate the information from multiple interactions to update the belief map of the contact state.\nWe pick the action that maximizes the contact patch between the objects.",
|
| 115 |
+
"url": "http://arxiv.org/html/2309.14552v2/x2.png"
|
| 116 |
+
},
|
| 117 |
+
"2": {
|
| 118 |
+
"figure_path": "2309.14552v2_figure_2.png",
|
| 119 |
+
"caption": "Figure 3: Definition of the probabilistic contact patch. (Left) The displacement (x,y)\ud835\udc65\ud835\udc66(x,y)( italic_x , italic_y ) is added from the origin of the bottom object O\ud835\udc42Oitalic_O during data collection. This displacement and known contact surfaces of the two objects give the ground-truth contact surface S\ud835\udc46Sitalic_S. (Right) The discretized contact patch S^^\ud835\udc46\\hat{S}over^ start_ARG italic_S end_ARG consists of a set of probabilities p\u2062(sj)\ud835\udc5dsubscript\ud835\udc60\ud835\udc57p(s_{j})italic_p ( italic_s start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) that represents whether a specific position sjsubscript\ud835\udc60\ud835\udc57s_{j}italic_s start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT of the contact surface of the grasped object is in contact or not.",
|
| 120 |
+
"url": "http://arxiv.org/html/2309.14552v2/x3.png"
|
| 121 |
+
},
|
| 122 |
+
"3": {
|
| 123 |
+
"figure_path": "2309.14552v2_figure_3.png",
|
| 124 |
+
"caption": "Figure 4: The 3D printed board and Bandu pieces used in our experiments. (a) We use the 3D printed board for training data collection. The board includes small and large circles with diameters of 15151515 and 25252525 mm and one square whose length is 15151515 mm. (b) The first two pieces on the left serve as the bottom objects (or the environment), while the subsequent three on the right are designated as the grasped (top) objects. These pieces have been assigned the following names: Short, Long, Mushroom, Barrel, and Pot from left to right.",
|
| 125 |
+
"url": "http://arxiv.org/html/2309.14552v2/x4.png"
|
| 126 |
+
},
|
| 127 |
+
"4": {
|
| 128 |
+
"figure_path": "2309.14552v2_figure_4.png",
|
| 129 |
+
"caption": "Figure 5: Distribution of contact patches:\n(a) Training data distribution with Pot as the grasped object and three different 3D printed shapes as the bottom objects (see Fig. 4). Each row shows the data obtained from different primitive shapes and each column shows the distribution of different data types: tactile displacements on the X\u2062Y\ud835\udc4b\ud835\udc4cXYitalic_X italic_Y axes (only shows the maximum absolute values from all 63 tracking markers), moments on the X\u2062Y\ud835\udc4b\ud835\udc4cXYitalic_X italic_Y axes and force Fzsubscript\ud835\udc39\ud835\udc67F_{z}italic_F start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT. The horizontal and vertical axes show the displacements randomly added during data collection (see Fig. 3), and the black circle or rectangle in each graph shows the contour of the bottom object.\n(b) Example contact patch sampled from the star points (\u2605\u2605\\bigstar\u2605) in the left distributions. Although these contact patches are very different, the tactile signals look quite similar as seen in the data around the star point, showing the difficulty of the task; i.e., similar tactile signals can lead to very different contact patches.",
|
| 130 |
+
"url": "http://arxiv.org/html/2309.14552v2/x5.png"
|
| 131 |
+
},
|
| 132 |
+
"5": {
|
| 133 |
+
"figure_path": "2309.14552v2_figure_5.png",
|
| 134 |
+
"caption": "Figure 6: An example of how the proposed method aggregates multiple estimations and updates contact probability map. The circle in a solid line shows the ground-truth contour of the bottom object. While the initial estimate (n=1\ud835\udc5b1n=1italic_n = 1) is incorrect, the estimation accuracy monotonically improves with multiple interactions (n=3,5\ud835\udc5b35n=3,5italic_n = 3 , 5).",
|
| 135 |
+
"url": "http://arxiv.org/html/2309.14552v2/x6.png"
|
| 136 |
+
},
|
| 137 |
+
"6": {
|
| 138 |
+
"figure_path": "2309.14552v2_figure_6.png",
|
| 139 |
+
"caption": "Figure 7: The robot moves towards a stable configuration and successfully stacks the Barrel piece on top of an already built tower consisting of Short and Long.",
|
| 140 |
+
"url": "http://arxiv.org/html/2309.14552v2/x7.png"
|
| 141 |
+
}
|
| 142 |
+
},
|
| 143 |
+
"validation": true,
|
| 144 |
+
"references": [],
|
| 145 |
+
"url": "http://arxiv.org/html/2309.14552v2"
|
| 146 |
+
}
|
20240323/2309.14945v2.json
ADDED
|
@@ -0,0 +1,624 @@
| 1 |
+
{
|
| 2 |
+
"title": "Integration of Large Language Models within Cognitive Architectures for Planning and Reasoning in Autonomous Robots",
|
| 3 |
+
"abstract": "Symbolic reasoning systems have been used in cognitive architectures to provide inference and planning capabilities. However, defining domains and problems has proven difficult and prone to errors. Moreover, Large Language Models (LLMs) have emerged as tools to process natural language for different tasks. In this paper, we propose the use of LLMs to tackle these problems. This way, this paper proposes the integration of LLMs in the ROS 2-integrated cognitive architecture MERLIN2 for autonomous robots. Specifically, we present the design, development and deployment of how to leverage the reasoning capabilities of LLMs inside the deliberative processes of MERLIN2. As a result, the deliberative system is updated from a PDDL-based planner system to a natural language planning system. This proposal is evaluated quantitatively and qualitatively, measuring the impact of incorporating the LLMs in the cognitive architecture. Results show that a classical approach achieves better performance but the proposed solution provides an enhanced interaction through natural language.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Symbolic reasoning systems have long served as integral components in cognitive architectures for robotics, offering capabilities in inference and planning. Nevertheless, symbolic systems rely on predefined rules, leading to difficulties with complexity and adaptability. In contrast, the advent of Large Language Models (LLMs) has introduced new avenues for natural language processing across various tasks. These models, exemplified by their application in problem-solving contexts [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###], leverage extensive textual resources to tackle complex problems effectively, offering access to a diverse range of information and guidance.\nTherefore, using LLMs in robotics can bring several benefits and capabilities. They have been used for natural language interaction [6 ###reference_b6###, 7 ###reference_b7###] enabling robots to understand and generate human language, making it easier for users to communicate with and control robots with natural language. It also simplifies the knowledge retrieval [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###] since LLMs have vast knowledge repositories, which can be used by robots to access information, answer questions, or provide explanations to users. Additionally, they can also be used in explainability and interpretability [11 ###reference_b11###] for robotics, leveraging the narrative capabilities of the models to explain and interpret the logs generated by robots.\nLLMs have recently achieved leading performance in tasks involving arithmetic and reasoning using the method of prompting known as chain-of-thought (CoT) [12 ###reference_b12###], which encourages complex multi-step reasoning by providing step-by-step answer examples. Studies such as [13 ###reference_b13###] recently achieved significant performance in such tasks using LLMs that are capable of zero-shot reasoning when it is added the instruction \u201dLet us consider this step by step\u201d before each answer part included in the prompt.\nBehavior generation in autonomous robots is still an open problem that has been faced by different paradigms. Within them, we can find the deliberative architectures [14 ###reference_b14###] that use planning approaches, the subsumption architectures [15 ###reference_b15###], and the reactive architectures [16 ###reference_b16###]. The combination of these paradigms produces hybrid architectures [17 ###reference_b17###], which significantly impacts the robotics community.\nOur proposal focuses on reasoning, presenting the integration of LLMs in our existing cognitive architecture. The proposed approach seeks to update the current deliberative module, which is based on a symbolic planner, by one based on LLMs. Therefore, this paper analyzes the use of llama_ros, available in [18 ###reference_b18###], in the MERLIN2 [19 ###reference_b19###], cognitive architecture. It allows using offline LLMs inside robotics systems. Thus, its use is proposed as an alternative to classic deliberative models based on symbolic knowledge expressed in PDDL. 
Finally, a quantitative and qualitative review of its effect and impact on the overall decision-making system is also described in the paper.\nContribution: The main contribution of this research is the contextualization and evaluation of using an LLM as a reasoning system inside a cognitive architecture.\nIn robotics, reasoning [20 ###reference_b20###] is the capability of logically and systematically processing the knowledge of the robot. There are several types of reasoning [21 ###reference_b21###], for instance, practical reasoning or planning, to find the next best action and perform it; and theoretical reasoning, which aims at establishing or evaluating beliefs. The reasoning approach that is faced in this work is closer to planning type, generating sequences of actions to achieve the robot\u2019s goals; and evaluating whether a specific goal is already achieved.\nThe evaluation of this proposal has been made in a human-robot interaction environment carried out in a mock-up apartment using a service robot. The reasoning faced in these missions consists of planning and evaluating whether a specific condition will be satisfied following the execution of a sequence of actions that commences from an initial state. Thus, we initially present this proposal as the implications of using LLMs in cognitive architectures. In other words, LLMs can be used in service robots although some could not see them as a formal reasoning mechanism.\nThe rest of the paper is organized as follows. Section II ###reference_### reviews the state of the art and the background. Section III ###reference_### depicts the material and methods of this work, that is the llama_ros tool, the cognitive architecture MERLIN2, the integration process, and the hardware and software setup. Section IV ###reference_### presents the evaluation of this work, composed of the experiment setup, the results and the discussions. Finally, Section V ###reference_### presents the conclusions and the future works."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Background and Related Work",
|
| 15 |
+
"text": "In recent years, months, and weeks, not only the scientific community has seen remarkable progress in the field of generative AI and large language models (LLMs) but also the general public, generating a perception that it could be used everywhere. This section will review LLMs and their use for inference to current models.\nLLMs. A Large Language Model (LLM) is an artificial intelligence system trained on vast amounts of text data to understand and generate human language. The release of chatGPT [22 ###reference_b22###] (GPT-3.5 and GPT-4). However, the release of LLaMA [23 ###reference_b23###] and LLaMA2 [24 ###reference_b24###], in their different sizes, 7B, 13B, 33B, and 65B; has marked a new age of LLMs since it allows researchers to train models on custom datasets.\nNevertheless, LLMs require substantial computational resources for inference and deployment. However, managing the computational burden is a significant challenge in resource-constrained environments like embedded systems within robots. Quantization [25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###], in this context, refers to the process of reducing the precision of the model\u2019s parameters and activations, typically from floating-point numbers to fixed-point numbers. By doing so, quantization dramatically reduces LLMs\u2019 memory and computational requirements, making them feasible for deployment in robots with limited processing power and memory. It is essential since it enables robots to leverage the power of LLMs for natural language understanding, decision-making, and interaction while operating efficiently within their constrained hardware environments.\nThe LLaMA models and the quantization methods allow the proliferation of a significant number of LLMs that can be deployed in personal computers and embedded systems with tools like llama.cpp [29 ###reference_b29###]. LLMs continue to evolve rapidly with the introduction of innovative models such as Alpaca [30 ###reference_b30###], Vicuna [31 ###reference_b31###], WizardLM [32 ###reference_b32###], Nous-Hermes [33 ###reference_b33###], and Marcoroni [34 ###reference_b34###], which contribute to the growing arsenal of robust language understanding and generation tools.\nDeliberative and reasoning capabilities of LLMs. As LLMs have grown increasingly sophisticated and capable, their ability to engage in meaningful deliberation and planning has become a research subject. Deliberation and planning entail carefully considering various options, arguments, and perspectives before deciding and strategically organizing actions or steps to achieve a particular. Some works have attempted to use LLMs as planners. For instance, [35 ###reference_b35###] uses PDDL for planning, while [36 ###reference_b36###] explores using few-shot planning in embodied agents like robots.\nDespite pre-trained models being widely recognized for their remarkable few-shot learning abilities in various natural language processing tasks, a recent prompting technique called chain-of-thought (CoT) [12 ###reference_b12###] has achieved state-of-the-art performance. In [13 ###reference_b13###], it has been proved that LLMs can also excel as zero-shot reasoners. This technique has been expanded by applying a search algorithm for better results. The tree-of-thought [37 ###reference_b37###] allows LLMs to perform deliberate decision-making by considering different reasoning paths, self-evaluating them, and deciding the next course of action. 
Another case is the graph-of-thought [38 ###reference_b38###] that is similar to the previous case but distributes the possible paths in a graph format instead of a tree.\nIn robotics, we can find more works that tried to perform PDDL planning with pre-trained LLMs [39 ###reference_b39###]. More advanced research like ProgPrompt [40 ###reference_b40###] enables plan generation through a programmatic LLM prompt structure. However, LLMs are rarely used within cognitive architectures.\nCognitive Architectures. Cognitive architectures serve as the foundational framework for autonomous robots, guiding their perception, decision-making, and action execution. These architectures can be broadly categorized into several classes, each offering unique advantages and characteristics tailored to specific robotic applications. Its use allows us to understand the relationship between the knowledge, the perception, and the action of such a robot.\nA taxonomy of cognitive architectures is posed in the literature [21 ###reference_b21###, 41 ###reference_b41###]. There are three categories: symbolic architectures, similar to deliberative architectures; emergent architectures, which replace reactive architectures and emphasize the connectionist concept; and hybrid architectures.\nThe most extended cognitive architecture category is the hybrid approach. For instance, HiMoP hybrid architecture is proposed in our previous works [42 ###reference_b42###]. It is composed of a deliberative system that uses PDDL (Planning Domain Definition Language) [43 ###reference_b43###] to represent the knowledge of the robot; a reactive system with a pool of state machines; and a motivational system, that contains all the robot needs. The world state of the robot is defined using the PDDL, which is the common language to carry out planning in robotics, while the state machines are used to implement the actions that the robot can perform.\nAlternatively, there are different technical approaches such as MOBAR [44 ###reference_b44###] and CORTEX [45 ###reference_b45###], where the knowledge of the robot is represented by a knowledge graph that holds symbolic and geometric information within the same structure or Gin\u00e9s et al. [46 ###reference_b46###, 47 ###reference_b47###], where the reactive system is based on behavior trees [48 ###reference_b48###]. These are the architectures that have guided the development of MERLIN2 [19 ###reference_b19###], the one used in this paper.\n###figure_1###"
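To make the quantization idea above concrete, the following minimal sketch shows a symmetric 8-bit round-trip on a small weight vector. It only illustrates the precision-reduction principle, not the block-wise 4-bit scheme that llama.cpp actually applies to the models used later in this work.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric 8-bit quantization: map float weights to int8 plus one scale."""
    scale = np.abs(weights).max() / 127.0  # per-tensor scale (llama.cpp uses per-block scales)
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for computation."""
    return q.astype(np.float32) * scale

weights = np.random.randn(8).astype(np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
print("original :", np.round(weights, 3))
print("int8     :", q)
print("restored :", np.round(restored, 3))
print("max error:", float(np.abs(weights - restored).max()))
```

Each weight now occupies one byte instead of four at the cost of a small reconstruction error, which is the trade-off that makes on-robot deployment feasible.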
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Materials and Methods",
|
| 21 |
+
"text": "This section lays the groundwork for the detailed integration of LLMs into the cognitive architecture known as MERLIN2. We delve into the llama_ros tool, which allows for integrating LLMs into ROS 2 [49 ###reference_b49###]. Additionally, we provide an overview of the MERLIN2 architecture, discuss how LLMs are integrated into MERLIN2, and explain how these models enhance the reasoning capabilities within the system. We also cover the technical setup, including both hardware and software components."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "III-A llama_ros",
|
| 27 |
+
"text": "The llama_ros tool [18 ###reference_b18###], available in a public repository111https://github.com/mgonzs13/llama_ros ###reference_###, integrates llama.cpp [29 ###reference_b29###] into ROS 2 through a suite of packages. It enables the execution of LLaMA-based models and other LLMs on devices with a low computing performance using integer quantization. This is achieved through a C/C++ implementation, facilitating model deployment across various platforms, including GPU support.\nThe llama_ros tool introduces a ROS 2 node, offering primary LLM functionalities such as:\nResponse generation: similar to applications like ChatGPT, llama_ros can generate responses to prompts from humans or other ROS 2 nodes. This is done using a ROS 2 action server.\nTokenize: a ROS 2 service for text tokenization, essential for the model\u2019s language processing. Tokens, which may represent characters, words, or other textual elements, are processed into sequences for the model.\nEmbeddings: another ROS 2 service produces text embeddings. These embeddings represent tokens in a high-dimensional space, each dimension capturing different linguistic features. They are critical for understanding and distinguishing language components.\nThese interfaces are particularly valuable for advanced prompt engineering methods [50 ###reference_b50###], such as using the embedding service to transform text into vectors for a database. This database supports Retrieval Augmented Generation (RAG) [51 ###reference_b51###] by allowing the retrieval of vectors related to specific texts, leading to the creation of more precise prompts.\nFurthermore, to enhance prompt engineering, llama_ros has been incorporated into LangChain [52 ###reference_b52###], a platform that eases the development of LLM applications. Integrating the aforementioned interfaces with LangChain enables the utilization of this framework within ROS 2 environments."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.2",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "III-B MERLIN2",
|
| 33 |
+
"text": "Cognitive architectures enable robots to execute complex behaviors. In this research, we use MERLIN2 [19 ###reference_b19###, 53 ###reference_b53###], a hybrid cognitive architecture integrated into ROS 2 and designed for autonomous robots, which is accessible in a public repository222https://github.com/MERLIN2-ARCH/merlin2 ###reference_###. Figure 1 ###reference_### depicts MERLIN2. Drawing on methodologies from existing literature [42 ###reference_b42###, 44 ###reference_b44###, 45 ###reference_b45###, 46 ###reference_b46###], MERLIN2 incorporates symbolic knowledge to model the robot\u2019s world, a deliberative system for planning to achieve goals, state machines for immediate behaviors, and emergent modules for object recognition, and both speech recognition and synthesis. These elements split into two systems: the Deliberative and the Behavioural Systems.\nThe Deliberative System manages high-level tasks and gathers the Mission Layer and the Planning Layer. The Mission Layer sets high-level objectives for the robot, using a state machine to generate these goals. The Planning Layer, based on traditional deliberative approaches, maintains a symbolic knowledge base in PDDL format to create action plans that fulfill the robot\u2019s goals, employing established PDDL planners like POPF [54 ###reference_b54###]. Control over the Planning Layer is exercised through a state machine built with YASMIN [55 ###reference_b55###] (shown in Figure 2 ###reference_###). This machine is in charge of generating PDDL from the knowledge base, crafting the plan with the planner, and executing the planned actions.\n###figure_2### The Behavioural System collects another two layers: the Executive and the Reactive Layers. The Executive Layer orchestrates the robot\u2019s immediate actions, leveraging its skills for short-term behaviors. These actions can be structured using either state machines or behavior trees for organization and execution. On the other hand, the Reactive Layer consolidates the robot\u2019s skills, encompassing capabilities like navigation, text-to-speech, and speech-to-text recognition."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.3",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "III-C Integrating LLMs into MERLIN2",
|
| 39 |
+
"text": "Incorporating LLMs into MERLIN2 aims to use their advanced reasoning abilities, transforming the existing Planning Layer. This modification involves substituting the current symbolic elements \u2013specifically, the PDDL planner and the knowledge base, as illustrated in Figure 1 ###reference_### \u2013 with LLM-driven components.\n###figure_3### The redefined layer is managed by a YASMIN state machine, depicted in Figure 3 ###reference_###. This setup features two nested state machines \u2013 PLANNING and CHECKING_GOAL \u2013 and a singular state, EXECUTING_PLAN. Within this structure, one nested state machine is dedicated to generating plans to meet the robot\u2019s goals, another to verify the accomplishment of these goals, and a separate state is in charge of the execution of the plans."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.3.1",
|
| 43 |
+
"parent_section_id": "3.3",
|
| 44 |
+
"section_name": "III-C1 Knowledge Graph",
|
| 45 |
+
"text": "The knowledge base in MERLIN2 is now replaced with a knowledge graph, accessible at a public repository333https://github.com/mgonzs13/knowledge_graph ###reference_h###, building on approaches from prior studies [45 ###reference_b45###, 46 ###reference_b46###, 47 ###reference_b47###]. This shift enhances how knowledge is organized through the graph\u2019s structure. Furthermore, employing a knowledge graph to encapsulate the robot\u2019s knowledge not only refines the cognitive architecture\u2019s practicality but also facilitates quick comprehension of the robot\u2019s knowledge base by human operators."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.3.2",
|
| 49 |
+
"parent_section_id": "3.3",
|
| 50 |
+
"section_name": "III-C2 Generating the World State",
|
| 51 |
+
"text": "The world state is an intermediary form of knowledge representation, derived from the robot\u2019s knowledge graph by translating graph data into discrete knowledge items. This form is particularly suited for inclusion in the prompts directed towards the LLM.\nThe structure of the knowledge items is detailed as follows: nodes are represented by the format \u201cnode is a type (properties)\u201d and edges adhere to the schema \u201cnode relationship node (properties)\u201d. The \u201c(properties)\u201d segment contains key-value pairs encapsulating the a information associated with a node or an edge, such as waypoint coordinates.\nIncorporating RAG enhances the creation of the world state, converting knowledge items into vectors and storing them within a vector database. Then, retrieval techniques are employed to selectively access knowledge items related to the robot\u2019s current goal. This process uses ChromaDB [56 ###reference_b56###], a high-performance in-memory vector database, for efficient management and retrieval of vectorized knowledge items.\n###figure_4###"
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.3.3",
|
| 55 |
+
"parent_section_id": "3.3",
|
| 56 |
+
"section_name": "III-C3 Planning",
|
| 57 |
+
"text": "The PLANNING state machine creates plans addressing the robot\u2019s objectives, aligning with the original function of generating PDDL plans. The entire planning process utilizing LLMs is illustrated in Figure 4 ###reference_###.\nFirst, the robot\u2019s world state is compiled through RAG, selectively gathering knowledge pertinent to the goal. This world state, alongside the goal, forms the basis of a planning prompt that enables the LLM to function as a planner. The prompt encapsulates the robot\u2019s possible actions, its current world state, and its goal.\nBy using zero-shot CoT [13 ###reference_b13###], the LLM is then engaged via LangChain and llama_ros to formulate a plan that meets the specified goal. Additionally, a grammar employing Backus-Naur Form (BNF) [57 ###reference_b57###] constrains the LLM\u2019s output to JSON444https://github.com/ggerganov/llama.cpp/blob/master/grammars/json.gbnf ###reference_b/master/grammars/json.gbnf###, simplifying parsing efforts. Lastly, the plan\u2019s validity is assessed by verifying the accuracy of the response format."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.3.4",
|
| 61 |
+
"parent_section_id": "3.3",
|
| 62 |
+
"section_name": "III-C4 Executing the Plan",
|
| 63 |
+
"text": "After the plan\u2019s creation, the EXECUTING_PLAN state, analogous to the original state, DISPATCHING_PLAN, designated for executing formulated plans, runs the actions delineated by the LLM. The execution of each action has the potential to update the knowledge graph with its outcomes."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.3.5",
|
| 67 |
+
"parent_section_id": "3.3",
|
| 68 |
+
"section_name": "III-C5 Checking Goal",
|
| 69 |
+
"text": "After the plan is executed, the CHECKING_GOAL state machine initiates. Similar to the process of the PLANNING state machine, it generates the world state and engages the LLM via LangChain by using its zero-shot reasoning abilities to assess if the goal has been met. The prompt for verifying goal achievement gathers both the current world state and the robot\u2019s intended goal. Additionally, this state generates a rationale, detailing whether the goal has been fulfilled or not.\nShould the LLM conclude that the objective remains unattained, the entire state machine will reinitiate the planning process, this time incorporating the rationale behind the unmet goal to refine and enhance the subsequent plan generation."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "3.4",
|
| 73 |
+
"parent_section_id": "3",
|
| 74 |
+
"section_name": "III-D Hardware and Software Setup",
|
| 75 |
+
"text": "For the experiments conducted in this study, we utilized a computer outfitted with an Intel(R) i9-13980HX processor, 32 GB of RAM, and an RTX 4070 Nvidia graphics card. The operating system and framework of choice were Ubuntu 22.04 and ROS 2 Humble, respectively. The llama_ros component was set up to employ a pretrained 4-bit quantized 13B Marcoroni LLM, optimized for GPU performance via cuBLAS. This setup involved activating 30 out of the total 43 layers for processing."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4",
|
| 79 |
+
"parent_section_id": null,
|
| 80 |
+
"section_name": "IV Evaluation",
|
| 81 |
+
"text": "This section presents the evaluation of MERLIN2 after integrating LLMs. It is based on comparing the resulting architecture with its original version MERLIN2. First, the experiment setup and the metrics are presented. Then, the results and the discussion are presented."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.1",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "IV-A Experimental Setup",
|
| 87 |
+
"text": "The evaluation method is similar to the one depicted in MERLIN [53 ###reference_b53###]. Thus, in this work, the experiments consist of greeting people in the rooms of an apartment using the RB1 robot, that is a service robot composed of a differential mobile base, with a LiDAR; and a torso, with an RGB-D camera and speakers. As a result, the robot must be able to navigate through the apartment and speak to people. Figure 5 ###reference_### shows the simulated apartment and the navigation map reused in this experiment. Moreover, the metrics used are the execution time, in seconds, required by the robot to complete the missions; and the traveled distance, in meters, covered by the robot during the execution of the missions.\n###figure_5### ###figure_6### Based on the presented missions, two experiments have been designed. Both of them are based on greeting people missions, but they differ in the number of missions. For each experiment, a randomized list of people that the robot needs to greet is created. Besides, half of the missions are canceled after 10 seconds of starting to evaluate the impact of switching missions. Hence, the first experiment comprises 6 missions, meaning greeting 6 times, of which 3 of them are canceled. The second involves 20 missions, that is, greeting 20 times, of which 10 of them are canceled.\nFor these experiments, the actions implemented for the experiments are the navigation and the greeting actions. The navigation action uses Nav2 [58 ###reference_b58###] to move the robot between the waypoints. Thus, this action receives as arguments: the robot name, the source waypoint and the target waypoint. The greeting action uses a text-to-speak tool to greet the four persons. Its arguments are the robot\u2019s name, the person\u2019s name and the waypoint of the person.\nOn the other hand, the initial knowledge graph for the experiment is presented in Figure 6 ###reference_###. It shows that the apartment is named GrannyHouse and has four rooms which corresponds with the marked waypoints in the map of Figure 5 ###reference_###. The rooms are the entrance, bathroom, bedroom and living room. Besides, there are four people in different rooms, as shown in the graph. Finally, the robot RB1 is at the entrance.\n###figure_7### Finally, the two experiments are executed with the original architecture MERLIN2 and with four different versions of the architecture with the LLMs integration:\nFull Integration (FI): this version uses all the components mentioned before in the integration part. The world state is generated with RAG and a vector database, ChromaDB, and the goal is checked with the CHECKING_GOAL state. The number of retried knowledge items from the vector database is 10.\nNo RAG Integration (NRI): this version is similar to the Full Version but does not use a vector database. As a result, it uses all the knowledge items from the graph as the world states that there are 19 knowledge items.\nNo CHECKING_GOAL State Integration (NCI): this version is similar to the Full Version but without the CHECKING_GOAL State. As a result, the architecture depends only on the planning capability of the LLM.\nNo RAG and No CHECKING_GOAL State Integration (NRNCI): this version uses neither RAG nor the CHECKING_GOAL state."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.2",
|
| 91 |
+
"parent_section_id": "4",
|
| 92 |
+
"section_name": "IV-B Results",
|
| 93 |
+
"text": "Table I ###reference_### shows the results for the first experiment, where each execution is composed of 6 goals. MERLIN2 presents the lowest mean time value of 107.728 seconds, while FI presents the most significant time value of 251.625 s. NRI, NCI and NRNCI get time values of 178.476 s, 200.149 s and 166.469 s. On the other hand, MERLIN2 has the highest value of traveled distance, 30.579 m, while the others have similar values of 15.297 m, 17.254 m, 14.540 and 20.797 m.\nTable II ###reference_### shows the results for the second experiment, where each execution is composed of 20 goals. In this experiment, MERLIN2 also presents the lowest mean time value of 257.763 seconds, while FI presents the highest time value of 848.196 s. NRI, NCI and NRNCI obtained time values of 563.292 s, 665.627 s and 508.021 s. On the other hand, MERLIN2 has the most significant value of traveled distance, 62.957 m, while the others have similar values of 53.100 m, 51.723 m, 49.373 m and 53.701 m."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "4.3",
|
| 97 |
+
"parent_section_id": "4",
|
| 98 |
+
"section_name": "IV-C Discussion",
|
| 99 |
+
"text": "The applicability of using a LLM for reasoning processes is evident. The experiment showcases a robot navigating and moving between points, similarly to the process conducted with the traditional PDDL framework.\nYet, when assessing both the time taken and the distance traversed during the mission, the performance under the traditional PDDL framework significantly outshines that of the LLM approach. In the initial experiment, the MERLIN2 architecture demonstrated a performance that was 2.3 times higher than the FI and 1.5 times better than the NRNCI version. Moreover, the outcomes of the second experiment reinforce MERLIN2 as the preferable option for decision-making over the LLM alternatives, being 3.2 times more efficient compared to FI and 1.9 times more effective against NRNCI. Conversely, when comparing NRI and NCI, NCI\u2019s execution time is 1.2 times slower than NRI in both experiments. This discrepancy highlights that employing RAG incurs a higher cost than merely using the LLM to verify goal achievement.\nThe advantage is also noticeable regarding the distance covered. This parameter allows for measuring effective service duration, as longer distances entail greater battery consumption. On average, outcomes based on the LLM depicted a scenario where the robot covered merely half the distance compared to operating under MERLIN2. This discrepancy stems from the LLM versions requiring more time for the planning phase than MERLIN2.\nFinally, throughout the development and integration process, it has been observed that quantized LLMs tend to exhibit a higher sensitivity to prompts compared to their unquantized counterparts. Additionally, several challenges were identified with incorporating LLMs within the cognitive architecture:\nIncreased World State Complexity: for complex environments, the world state \u2013 and consequently, the prompt size \u2013 may expand. Employing RAG can mitigate this issue by streamlining the prompt size, albeit at the cost of extended execution times.\nFeedback from CHECKING_GOAL: although it may prolong the execution period, the feedback obtained from the CHECKING_GOAL process can significantly benefit the planning stage, particularly when re-planning is required by unmet goals. However, it further prolongs the overall execution timeframe."
|
| 100 |
+
},
|
| 101 |
+
{
|
| 102 |
+
"section_id": "5",
|
| 103 |
+
"parent_section_id": null,
|
| 104 |
+
"section_name": "Conclusions",
|
| 105 |
+
"text": "We have presented the integration of LLMs into MERLIN2. Leveraging the LLM\u2019s reasoning capabilities, a new Planning Layer has been built and experimentation has been carried out to evaluate the integration process in human-robot interaction missions within a simulated environment.\nIt is worth discussing the PDDL performance revealed by the results of our experiments, which are superior to LLM in their different approaches. However, part of the community would prefer to interact in a more natural language manner, as is done with the LLM options which is achieved thanks to the quantized LLMs and the llama_ros tool integrated into MERLIN2. As a result, the cognitive architecture can present an improved human-robot interaction degree.\nIn future works, we propose to improve the performance by utilizing smaller and more accurate LLMs and specific embedding models to improve the RAG process. Besides, we will explore the use of graph algorithms to try to optimize the world state instead of using RAG as well as the use of other symbolic techniques like anthologies. Finally, we want to include the new Visual Language Models (VLM) which can be used to extract natural language text from the images captured by the robot. These ongoing efforts can contribute to the evolution and refinement of cognitive architectures incorporating cutting-edge language models."
|
| 106 |
+
}
|
| 107 |
+
],
|
| 108 |
+
"appendix": [],
|
| 109 |
+
"tables": {
|
| 110 |
+
"1": {
|
| 111 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Results for the first experiment with 6 missions.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.50\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.50.51.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T1.50.51.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"5\" id=\"S4.T1.50.51.1.2\">Execution Time (Seconds)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"5\" id=\"S4.T1.50.51.1.3\">Traveled Distance (Meters)</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.50.52.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T1.50.52.2.1\"></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.50.52.2.2\">MERLIN2</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.50.52.2.3\">FI</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.50.52.2.4\">NRI</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.50.52.2.5\">NCI</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.50.52.2.6\">NRNCI</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.50.52.2.7\">MERLIN2</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.50.52.2.8\">FI</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.50.52.2.9\">NRI</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.50.52.2.10\">NCI</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.50.52.2.11\">NRNCI</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.10.10.11\">Mean</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.4.4.4\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.5.5.5\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.6.6.6\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.7.7.7\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.8.8.8\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.9.9.9\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.10.10.10\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.20.20\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.20.20.11\">Std. 
Deviation</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.12.12.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.13.13.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.14.14.4\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.15.15.5\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.16.16.6\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.17.17.7\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.18.18.8\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.19.19.9\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.20.20.10\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.30.30\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.30.30.11\">Minimum</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.21.21.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.22.22.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.23.23.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.24.24.4\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.25.25.5\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.26.26.6\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.27.27.7\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.28.28.8\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.29.29.9\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.30.30.10\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.40.40\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.40.40.11\">Maximum</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.31.31.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.32.32.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.33.33.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.34.34.4\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.35.35.5\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.36.36.6\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.37.37.7\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.38.38.8\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.39.39.9\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.40.40.10\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.50.50\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T1.50.50.11\">Sum</th>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.41.41.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.42.42.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.43.43.3\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.44.44.4\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.45.45.5\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.46.46.6\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.47.47.7\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.48.48.8\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.49.49.9\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.50.50.10\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 112 |
+
"capture": "TABLE I: Results for the first experiment with 6 missions."
|
| 113 |
+
},
|
| 114 |
+
"2": {
|
| 115 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Results for the second experiment with 20 missions.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.50\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.50.51.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T2.50.51.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"5\" id=\"S4.T2.50.51.1.2\">Execution Time (Seconds)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"5\" id=\"S4.T2.50.51.1.3\">Traveled Distance (Meters)</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.50.52.2\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S4.T2.50.52.2.1\"></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.50.52.2.2\">MERLIN2</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.50.52.2.3\">FI</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.50.52.2.4\">NRI</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.50.52.2.5\">NCI</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.50.52.2.6\">NRNCI</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.50.52.2.7\">MERLIN2</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.50.52.2.8\">FI</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.50.52.2.9\">NEV</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.50.52.2.10\">NCI</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.50.52.2.11\">NRNCI</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.10.10.11\">Mean</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.4.4.4\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.5.5.5\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.6.6.6\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.7.7.7\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.8.8.8\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.9.9.9\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.10.10.10\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.20.20\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.20.20.11\">Std. 
Deviation</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.12.12.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.13.13.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.14.14.4\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.15.15.5\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.16.16.6\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.17.17.7\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.18.18.8\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.19.19.9\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.20.20.10\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.30.30\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.30.30.11\">Minimum</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.21.21.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.22.22.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.23.23.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.24.24.4\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.25.25.5\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.26.26.6\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.27.7\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.28.28.8\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.29.29.9\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.30.30.10\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.40.40\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.40.40.11\">Maximum</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.31.31.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.32.32.2\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.33.33.3\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.34.34.4\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.35.35.5\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.36.36.6\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.37.37.7\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.38.38.8\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.39.39.9\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.40.40.10\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.50.50\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S4.T2.50.50.11\">Sum</th>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.41.41.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.42.42.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.43.43.3\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.44.44.4\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.45.45.5\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.46.46.6\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.47.47.7\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.48.48.8\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.49.49.9\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.50.50.10\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 116 |
+
"capture": "TABLE II: Results for the second experiment with 20 missions."
|
| 117 |
+
}
|
| 118 |
+
},
|
| 119 |
+
"image_paths": {
|
| 120 |
+
"1": {
|
| 121 |
+
"figure_path": "2309.14945v2_figure_1.png",
|
| 122 |
+
"caption": "Figure 1: MERLIN2 architecture diagram. The original architecture is formed by the Deliberative System and the Behavioral System which are divided into four layers, which are Mission Layer, Planning Layer, Executive Layer and Reactive Layer. LLMs are integrated into the Planning Layer by using llama_ros and LangChain. Besides, a knowledge graph replaces the original knowledge base.",
|
| 123 |
+
"url": "http://arxiv.org/html/2309.14945v2/x1.png"
|
| 124 |
+
},
|
| 125 |
+
"2": {
|
| 126 |
+
"figure_path": "2309.14945v2_figure_2.png",
|
| 127 |
+
"caption": "Figure 2: YASMIN state machine in charge of controlling the original Planning Layer of MERLIN2. It has three states, one to generate the PDDL from the knowledge base, another state to generate the plans using PDDL planners and another state to execute the plan.",
|
| 128 |
+
"url": "http://arxiv.org/html/2309.14945v2/extracted/5490669/figures/original_planning.png"
|
| 129 |
+
},
|
| 130 |
+
"3": {
|
| 131 |
+
"figure_path": "2309.14945v2_figure_3.png",
|
| 132 |
+
"caption": "Figure 3: YASMIN state machine in charge of controlling the new Planning Layer obtained after integrating the LLMs into MERLIN2. It has a nested state machine to generate plans for the goals, a state to execute the plan, and another nested state machine to check if the goal is achieved.",
|
| 133 |
+
"url": "http://arxiv.org/html/2309.14945v2/extracted/5490669/figures/agi4ros_fsm.jpeg"
|
| 134 |
+
},
|
| 135 |
+
"4": {
|
| 136 |
+
"figure_path": "2309.14945v2_figure_4.png",
|
| 137 |
+
"caption": "Figure 4: Pipeline to perform the planning using LLMs and RAG inside the resulting cognitive architecture. First, the knowledge of the robot is converted into embeddings employing the LLM. Using the goal, a query is created to retrieve the relevant knowledge, that is the world state. Then, the world state and the goal are used to create the planning prompt which is used to prompt the LLM, through LangChain, to generate the plan.",
|
| 138 |
+
"url": "http://arxiv.org/html/2309.14945v2/extracted/5490669/figures/arquitectura_MERLIN2_planning.png"
|
| 139 |
+
},
|
| 140 |
+
"5(a)": {
|
| 141 |
+
"figure_path": "2309.14945v2_figure_5(a).png",
|
| 142 |
+
"caption": "Figure 5: Gazebo zenith view and map deployed for the experimental validation of the resulting cognitive architecture after integrating LLMs. The map shows the four points that are used to perform the experiment. These points contain a person that the robot must greet in each iteration of the experiment.",
|
| 143 |
+
"url": "http://arxiv.org/html/2309.14945v2/extracted/5490669/figures/granny_gazebo.png"
|
| 144 |
+
},
|
| 145 |
+
"5(b)": {
|
| 146 |
+
"figure_path": "2309.14945v2_figure_5(b).png",
|
| 147 |
+
"caption": "Figure 5: Gazebo zenith view and map deployed for the experimental validation of the resulting cognitive architecture after integrating LLMs. The map shows the four points that are used to perform the experiment. These points contain a person that the robot must greet in each iteration of the experiment.",
|
| 148 |
+
"url": "http://arxiv.org/html/2309.14945v2/extracted/5490669/figures/GrannyAnnie_1.png"
|
| 149 |
+
},
|
| 150 |
+
"6": {
|
| 151 |
+
"figure_path": "2309.14945v2_figure_6.png",
|
| 152 |
+
"caption": "Figure 6: Knowledge graph with for the experiments. It shows an apartment named GrannyHouse with four rooms which are the entrance, the bathroom, the bedroom and the living room. There is a person in each room: Miguel at the entrance, Fran in the bathroom, Angel in the bedroom and Vicente in the living room. Finally, the robot RB1 is at the entrance.",
|
| 153 |
+
"url": "http://arxiv.org/html/2309.14945v2/extracted/5490669/figures/kg_experiment.png"
|
| 154 |
+
}
|
| 155 |
+
},
|
| 156 |
+
"validation": true,
|
| 157 |
+
"references": [
|
| 158 |
+
{
|
| 159 |
+
"1": {
|
| 160 |
+
"title": "Learning to reason over scene graphs: a case study of finetuning gpt-2 into a robot language model for grounded task planning.",
|
| 161 |
+
"author": "Georgia Chalvatzaki, Ali Younes, Daljeet Nandha, An Thai Le, Leonardo F. R. Ribeiro, and Iryna Gurevych.",
|
| 162 |
+
"venue": "Frontiers in Robotics and AI, 10, 2023.",
|
| 163 |
+
"url": null
|
| 164 |
+
}
|
| 165 |
+
},
|
| 166 |
+
{
|
| 167 |
+
"2": {
|
| 168 |
+
"title": "Generalized planning in pddl domains with pretrained large language models.",
|
| 169 |
+
"author": "Tom Silver, Soham Dan, Kavitha Srinivas, Joshua B Tenenbaum, Leslie Pack Kaelbling, and Michael Katz.",
|
| 170 |
+
"venue": "arXiv preprint arXiv:2305.11014, 2023.",
|
| 171 |
+
"url": null
|
| 172 |
+
}
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"3": {
|
| 176 |
+
"title": "Progprompt: Generating situated robot task plans using large language models.",
|
| 177 |
+
"author": "Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg.",
|
| 178 |
+
"venue": "In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 11523\u201311530, 2023.",
|
| 179 |
+
"url": null
|
| 180 |
+
}
|
| 181 |
+
},
|
| 182 |
+
{
|
| 183 |
+
"4": {
|
| 184 |
+
"title": "Open-vocabulary queryable scene representations for real world planning.",
|
| 185 |
+
"author": "Boyuan Chen, Fei Xia, Brian Ichter, Kanishka Rao, Keerthana Gopalakrishnan, Michael S. Ryoo, Austin Stone, and Daniel Kappler.",
|
| 186 |
+
"venue": "In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 11509\u201311522, 2023.",
|
| 187 |
+
"url": null
|
| 188 |
+
}
|
| 189 |
+
},
|
| 190 |
+
{
|
| 191 |
+
"5": {
|
| 192 |
+
"title": "Exploring the limitations of using large language models to fix planning tasks.",
|
| 193 |
+
"author": "Alba Gragera and Alberto Pozanco.",
|
| 194 |
+
"venue": "In ICAPS23, 2023.",
|
| 195 |
+
"url": null
|
| 196 |
+
}
|
| 197 |
+
},
|
| 198 |
+
{
|
| 199 |
+
"6": {
|
| 200 |
+
"title": "Rosgpt: Next-generation human-robot interaction with chatgpt and ros, 2023.",
|
| 201 |
+
"author": "Anis Koubaa.",
|
| 202 |
+
"venue": null,
|
| 203 |
+
"url": null
|
| 204 |
+
}
|
| 205 |
+
},
|
| 206 |
+
{
|
| 207 |
+
"7": {
|
| 208 |
+
"title": "Chatsim: Underwater simulation with natural language prompting.",
|
| 209 |
+
"author": "Aadi Palnitkar, Rashmi Kapu, Xiaomin Lin, Cheng Liu, Nare Karapetyan, and Yiannis Aloimonos.",
|
| 210 |
+
"venue": "arXiv preprint arXiv:2308.04029, 2023.",
|
| 211 |
+
"url": null
|
| 212 |
+
}
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"8": {
|
| 216 |
+
"title": "Modelscope-agent: Building your customizable agent system with open-source large language models.",
|
| 217 |
+
"author": "Chenliang Li, Hehong Chen, Ming Yan, Weizhou Shen, Haiyang Xu, Zhikai Wu, Zhicheng Zhang, Wenmeng Zhou, Yingda Chen, Chen Cheng, et al.",
|
| 218 |
+
"venue": "arXiv preprint arXiv:2309.00986, 2023.",
|
| 219 |
+
"url": null
|
| 220 |
+
}
|
| 221 |
+
},
|
| 222 |
+
{
|
| 223 |
+
"9": {
|
| 224 |
+
"title": "X-factr: Multilingual factual knowledge retrieval from pretrained language models.",
|
| 225 |
+
"author": "Zhengbao Jiang, Antonios Anastasopoulos, Jun Araki, Haibo Ding, and Graham Neubig.",
|
| 226 |
+
"venue": "arXiv preprint arXiv:2010.06189, 2020.",
|
| 227 |
+
"url": null
|
| 228 |
+
}
|
| 229 |
+
},
|
| 230 |
+
{
|
| 231 |
+
"10": {
|
| 232 |
+
"title": "Decoding prompt syntax: Analysing its impact on knowledge retrieval in large language models.",
|
| 233 |
+
"author": "Stephan Linzbach, Tim Tressel, Laura Kallmeyer, Stefan Dietze, and Hajira Jabeen.",
|
| 234 |
+
"venue": "In Companion Proceedings of the ACM Web Conference 2023, pages 1145\u20131149, 2023.",
|
| 235 |
+
"url": null
|
| 236 |
+
}
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"11": {
|
| 240 |
+
"title": "Using large language models for interpreting autonomous robots behaviors.",
|
| 241 |
+
"author": "Miguel \u00c1. Gonz\u00e1lez-Santamarta, Laura Fern\u00e1ndez-Becerra, David Sobr\u00edn-Hidalgo, \u00c1ngel Manuel Guerrero-Higueras, Irene Gonz\u00e1lez, and Francisco J. Rodr\u00edguez Lera.",
|
| 242 |
+
"venue": "In Hybrid Artificial Intelligent Systems, pages 533\u2013544, Cham, 2023. Springer Nature Switzerland.",
|
| 243 |
+
"url": null
|
| 244 |
+
}
|
| 245 |
+
},
|
| 246 |
+
{
|
| 247 |
+
"12": {
|
| 248 |
+
"title": "Chain-of-thought prompting elicits reasoning in large language models.",
|
| 249 |
+
"author": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al.",
|
| 250 |
+
"venue": "Advances in Neural Information Processing Systems, 35:24824\u201324837, 2022.",
|
| 251 |
+
"url": null
|
| 252 |
+
}
|
| 253 |
+
},
|
| 254 |
+
{
|
| 255 |
+
"13": {
|
| 256 |
+
"title": "Large language models are zero-shot reasoners, 2023.",
|
| 257 |
+
"author": "Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa.",
|
| 258 |
+
"venue": null,
|
| 259 |
+
"url": null
|
| 260 |
+
}
|
| 261 |
+
},
|
| 262 |
+
{
|
| 263 |
+
"14": {
|
| 264 |
+
"title": "Deliberation for autonomous robots: A survey.",
|
| 265 |
+
"author": "F\u00e9lix Ingrand and Malik Ghallab.",
|
| 266 |
+
"venue": "Artificial Intelligence, 247:10\u201344, 2017.",
|
| 267 |
+
"url": null
|
| 268 |
+
}
|
| 269 |
+
},
|
| 270 |
+
{
|
| 271 |
+
"15": {
|
| 272 |
+
"title": "New approaches to robotics.",
|
| 273 |
+
"author": "Rodney A Brooks.",
|
| 274 |
+
"venue": "Science, 253(5025):1227\u20131232, 1991.",
|
| 275 |
+
"url": null
|
| 276 |
+
}
|
| 277 |
+
},
|
| 278 |
+
{
|
| 279 |
+
"16": {
|
| 280 |
+
"title": "Experiences with an architecture for intelligent, reactive agents.",
|
| 281 |
+
"author": "R Peter Bonasso, R James Firby, Erann Gat, David Kortenkamp, David P Miller, and Mark G Slack.",
|
| 282 |
+
"venue": "Journal of Experimental & Theoretical Artificial Intelligence, 9(2-3):237\u2013256, 1997.",
|
| 283 |
+
"url": null
|
| 284 |
+
}
|
| 285 |
+
},
|
| 286 |
+
{
|
| 287 |
+
"17": {
|
| 288 |
+
"title": "Aura: Principles and practice in review.",
|
| 289 |
+
"author": "Ronald C Arkin and Tucker Balch.",
|
| 290 |
+
"venue": "Journal of Experimental & Theoretical Artificial Intelligence, 9(2-3):175\u2013189, 1997.",
|
| 291 |
+
"url": null
|
| 292 |
+
}
|
| 293 |
+
},
|
| 294 |
+
{
|
| 295 |
+
"18": {
|
| 296 |
+
"title": "llama_ros.",
|
| 297 |
+
"author": "Miguel \u00c1. Gonz\u00e1lez-Santamarta.",
|
| 298 |
+
"venue": "https://github.com/mgonzs13/llama_ros, April 2023.",
|
| 299 |
+
"url": null
|
| 300 |
+
}
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"19": {
|
| 304 |
+
"title": "Merlin2: Machined ros 2 planing.",
|
| 305 |
+
"author": "Miguel \u00c1. Gonz\u00e1lez-Santamarta, Francisco J. Rodr\u00edguez-Lera, Camino Fern\u00e1ndez-Llamas, and Vicente Matell\u00e1n-Olivera.",
|
| 306 |
+
"venue": "Software Impacts, 15:100477, 2023.",
|
| 307 |
+
"url": null
|
| 308 |
+
}
|
| 309 |
+
},
|
| 310 |
+
{
|
| 311 |
+
"20": {
|
| 312 |
+
"title": "Cognitive robotics.",
|
| 313 |
+
"author": "Hector Levesque and Gerhard Lakemeyer.",
|
| 314 |
+
"venue": "Foundations of artificial intelligence, 3:869\u2013886, 2008.",
|
| 315 |
+
"url": null
|
| 316 |
+
}
|
| 317 |
+
},
|
| 318 |
+
{
|
| 319 |
+
"21": {
|
| 320 |
+
"title": "A survey of cognitive architectures in the past 20 years.",
|
| 321 |
+
"author": "Peijun Ye, Tao Wang, and Fei-Yue Wang.",
|
| 322 |
+
"venue": "IEEE transactions on cybernetics, 48(12):3280\u20133290, 2018.",
|
| 323 |
+
"url": null
|
| 324 |
+
}
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"22": {
|
| 328 |
+
"title": "Gpt-4 technical report.",
|
| 329 |
+
"author": "OpenAI.",
|
| 330 |
+
"venue": "https://arxiv.org/abs/2303.08774, 2023.",
|
| 331 |
+
"url": null
|
| 332 |
+
}
|
| 333 |
+
},
|
| 334 |
+
{
|
| 335 |
+
"23": {
|
| 336 |
+
"title": "Llama: Open and efficient foundation language models, 2023.",
|
| 337 |
+
"author": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, and Baptiste Rozi\u00e8re et. al.",
|
| 338 |
+
"venue": null,
|
| 339 |
+
"url": null
|
| 340 |
+
}
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"24": {
|
| 344 |
+
"title": "Llama 2: Open foundation and fine-tuned chat models, 2023.",
|
| 345 |
+
"author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, and Prajjwal Bhargava et. al.",
|
| 346 |
+
"venue": null,
|
| 347 |
+
"url": null
|
| 348 |
+
}
|
| 349 |
+
},
|
| 350 |
+
{
|
| 351 |
+
"25": {
|
| 352 |
+
"title": "Integer quantization for deep learning inference: Principles and empirical evaluation.",
|
| 353 |
+
"author": "Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev, and Paulius Micikevicius.",
|
| 354 |
+
"venue": "arXiv preprint arXiv:2004.09602, 2020.",
|
| 355 |
+
"url": null
|
| 356 |
+
}
|
| 357 |
+
},
|
| 358 |
+
{
|
| 359 |
+
"26": {
|
| 360 |
+
"title": "Deep learning with low precision by half-wave gaussian quantization.",
|
| 361 |
+
"author": "Zhaowei Cai, Xiaodong He, Jian Sun, and Nuno Vasconcelos.",
|
| 362 |
+
"venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5918\u20135926, 2017.",
|
| 363 |
+
"url": null
|
| 364 |
+
}
|
| 365 |
+
},
|
| 366 |
+
{
|
| 367 |
+
"27": {
|
| 368 |
+
"title": "Neural networks with few multiplications.",
|
| 369 |
+
"author": "Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio.",
|
| 370 |
+
"venue": "arXiv preprint arXiv:1510.03009, 2015.",
|
| 371 |
+
"url": null
|
| 372 |
+
}
|
| 373 |
+
},
|
| 374 |
+
{
|
| 375 |
+
"28": {
|
| 376 |
+
"title": "Fixed point quantization of deep convolutional networks.",
|
| 377 |
+
"author": "Darryl Lin, Sachin Talathi, and Sreekanth Annapureddy.",
|
| 378 |
+
"venue": "In International conference on machine learning, pages 2849\u20132858. PMLR, 2016.",
|
| 379 |
+
"url": null
|
| 380 |
+
}
|
| 381 |
+
},
|
| 382 |
+
{
|
| 383 |
+
"29": {
|
| 384 |
+
"title": "https://github.com/ggerganov/llama.cpp, 2023.",
|
| 385 |
+
"author": "GitHub - ggerganov/llama.cpp: Port of Facebook\u2019s LLaMA model in C/C++ \u2014 github.com.",
|
| 386 |
+
"venue": null,
|
| 387 |
+
"url": null
|
| 388 |
+
}
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"30": {
|
| 392 |
+
"title": "Stanford alpaca: An instruction-following llama model.",
|
| 393 |
+
"author": "Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto.",
|
| 394 |
+
"venue": "https://github.com/tatsu-lab/stanford_alpaca, 2023.",
|
| 395 |
+
"url": null
|
| 396 |
+
}
|
| 397 |
+
},
|
| 398 |
+
{
|
| 399 |
+
"31": {
|
| 400 |
+
"title": "Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.",
|
| 401 |
+
"author": "Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica.",
|
| 402 |
+
"venue": null,
|
| 403 |
+
"url": null
|
| 404 |
+
}
|
| 405 |
+
},
|
| 406 |
+
{
|
| 407 |
+
"32": {
|
| 408 |
+
"title": "Wizardlm: Empowering large language models to follow complex instructions, 2023.",
|
| 409 |
+
"author": "Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang.",
|
| 410 |
+
"venue": null,
|
| 411 |
+
"url": null
|
| 412 |
+
}
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"33": {
|
| 416 |
+
"title": "https://huggingface.co/NousResearch/Nous-Hermes-13b.",
|
| 417 |
+
"author": "NousResearch/Nous-Hermes-13b Hugging Face \u2014 huggingface.co.",
|
| 418 |
+
"venue": "[Accessed 11-09-2023].",
|
| 419 |
+
"url": null
|
| 420 |
+
}
|
| 421 |
+
},
|
| 422 |
+
{
|
| 423 |
+
"34": {
|
| 424 |
+
"title": "https://huggingface.co/AIDC-ai-business/Marcoroni-13B.",
|
| 425 |
+
"author": "AIDC-ai-business/Marcoroni-13B Hugging Face \u2014 huggingface.co.",
|
| 426 |
+
"venue": "[Accessed 13-09-2023].",
|
| 427 |
+
"url": null
|
| 428 |
+
}
|
| 429 |
+
},
|
| 430 |
+
{
|
| 431 |
+
"35": {
|
| 432 |
+
"title": "Leveraging pre-trained large language models to construct and utilize world models for model-based task planning, 2023.",
|
| 433 |
+
"author": "Lin Guan, Karthik Valmeekam, Sarath Sreedharan, and Subbarao Kambhampati.",
|
| 434 |
+
"venue": null,
|
| 435 |
+
"url": null
|
| 436 |
+
}
|
| 437 |
+
},
|
| 438 |
+
{
|
| 439 |
+
"36": {
|
| 440 |
+
"title": "Llm-planner: Few-shot grounded planning for embodied agents with large language models, 2023.",
|
| 441 |
+
"author": "Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M. Sadler, Wei-Lun Chao, and Yu Su.",
|
| 442 |
+
"venue": null,
|
| 443 |
+
"url": null
|
| 444 |
+
}
|
| 445 |
+
},
|
| 446 |
+
{
|
| 447 |
+
"37": {
|
| 448 |
+
"title": "Tree of thoughts: Deliberate problem solving with large language models.",
|
| 449 |
+
"author": "Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan.",
|
| 450 |
+
"venue": "arXiv preprint arXiv:2305.10601, 2023.",
|
| 451 |
+
"url": null
|
| 452 |
+
}
|
| 453 |
+
},
|
| 454 |
+
{
|
| 455 |
+
"38": {
|
| 456 |
+
"title": "Graph of thoughts: Solving elaborate problems with large language models.",
|
| 457 |
+
"author": "Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, et al.",
|
| 458 |
+
"venue": "arXiv preprint arXiv:2308.09687, 2023.",
|
| 459 |
+
"url": null
|
| 460 |
+
}
|
| 461 |
+
},
|
| 462 |
+
{
|
| 463 |
+
"39": {
|
| 464 |
+
"title": "Pddl planning with pretrained large language models.",
|
| 465 |
+
"author": "Tom Silver, Varun Hariprasad, Reece S Shuttleworth, Nishanth Kumar, Tom\u00e1s Lozano-P\u00e9rez, and Leslie Pack Kaelbling.",
|
| 466 |
+
"venue": "In NeurIPS 2022 Foundation Models for Decision Making Workshop, 2022.",
|
| 467 |
+
"url": null
|
| 468 |
+
}
|
| 469 |
+
},
|
| 470 |
+
{
|
| 471 |
+
"40": {
|
| 472 |
+
"title": "Progprompt: program generation for situated robot task planning using large language models.",
|
| 473 |
+
"author": "Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg.",
|
| 474 |
+
"venue": "Autonomous Robots, pages 1\u201314, 2023.",
|
| 475 |
+
"url": null
|
| 476 |
+
}
|
| 477 |
+
},
|
| 478 |
+
{
|
| 479 |
+
"41": {
|
| 480 |
+
"title": "40 years of cognitive architectures: core cognitive abilities and practical applications.",
|
| 481 |
+
"author": "Iuliia Kotseruba and John K Tsotsos.",
|
| 482 |
+
"venue": "Artificial Intelligence Review, 53(1):17\u201394, 2020.",
|
| 483 |
+
"url": null
|
| 484 |
+
}
|
| 485 |
+
},
|
| 486 |
+
{
|
| 487 |
+
"42": {
|
| 488 |
+
"title": "Himop: a three-component architecture to create more human-acceptable social-assistive robots: motivational architecture for assistive robots.",
|
| 489 |
+
"author": "Francisco J Rodr\u00edguez-Lera, Vicente Matell\u00e1n-Olivera, Miguel \u00c1 Conde-Gonz\u00e1lez, and Francisco Mart\u00edn-Rico.",
|
| 490 |
+
"venue": "Cognitive processing, 19:233\u2013244, 2018.",
|
| 491 |
+
"url": null
|
| 492 |
+
}
|
| 493 |
+
},
|
| 494 |
+
{
|
| 495 |
+
"43": {
|
| 496 |
+
"title": "Pddl2.1: An extension to pddl for expressing temporal planning domains.",
|
| 497 |
+
"author": "Maria Fox and Derek Long.",
|
| 498 |
+
"venue": "J. Artif. Intell. Res. (JAIR), 20:61\u2013124, 12 2003.",
|
| 499 |
+
"url": null
|
| 500 |
+
}
|
| 501 |
+
},
|
| 502 |
+
{
|
| 503 |
+
"44": {
|
| 504 |
+
"title": "Mobar: a hierarchical action-oriented autonomous control architecture.",
|
| 505 |
+
"author": "Pablo Mu\u00f1oz, Mar\u00eda D R-Moreno, David F Barrero, and Fernando Ropero.",
|
| 506 |
+
"venue": "Journal of Intelligent & Robotic Systems, 94:745\u2013760, 2019.",
|
| 507 |
+
"url": null
|
| 508 |
+
}
|
| 509 |
+
},
|
| 510 |
+
{
|
| 511 |
+
"45": {
|
| 512 |
+
"title": "The cortex cognitive robotics architecture: Use cases.",
|
| 513 |
+
"author": "Pablo Bustos, Luis Jes\u00fas Manso, Antonio J Bandera, Juan P Bandera, Ismael Garcia-Varea, and Jesus Martinez-Gomez.",
|
| 514 |
+
"venue": "Cognitive systems research, 55:107\u2013123, 2019.",
|
| 515 |
+
"url": null
|
| 516 |
+
}
|
| 517 |
+
},
|
| 518 |
+
{
|
| 519 |
+
"46": {
|
| 520 |
+
"title": "Depicting probabilistic context awareness knowledge in deliberative architectures.",
|
| 521 |
+
"author": "Jonatan Gin\u00e9s, Francisco J Rodr\u00edguez-Lera, Francisco Mart\u00edn, \u00c1ngel Manuel Guerrero, and Vicente Matell\u00e1n.",
|
| 522 |
+
"venue": "Natural Computing, pages 1\u201312, 2022.",
|
| 523 |
+
"url": null
|
| 524 |
+
}
|
| 525 |
+
},
|
| 526 |
+
{
|
| 527 |
+
"47": {
|
| 528 |
+
"title": "Client-server approach for managing visual attention, integrated in a cognitive architecture for a social robot.",
|
| 529 |
+
"author": "Francisco Mart\u00edn, Jonatan Gin\u00e9s, Francisco J Rodr\u00edguez-Lera, Angel M Guerrero-Higueras, and Vicente Matell\u00e1n Olivera.",
|
| 530 |
+
"venue": "Frontiers in Neurorobotics, 15:630386, 2021.",
|
| 531 |
+
"url": null
|
| 532 |
+
}
|
| 533 |
+
},
|
| 534 |
+
{
|
| 535 |
+
"48": {
|
| 536 |
+
"title": "On the implementation of behavior trees in robotics.",
|
| 537 |
+
"author": "Michele Colledanchise and Lorenzo Natale.",
|
| 538 |
+
"venue": "IEEE Robotics and Automation Letters, 6(3):5929\u20135936, 2021.",
|
| 539 |
+
"url": null
|
| 540 |
+
}
|
| 541 |
+
},
|
| 542 |
+
{
|
| 543 |
+
"49": {
|
| 544 |
+
"title": "Robot operating system 2: Design, architecture, and uses in the wild.",
|
| 545 |
+
"author": "Steven Macenski, Tully Foote, Brian Gerkey, Chris Lalancette, and William Woodall.",
|
| 546 |
+
"venue": "Science Robotics, 7(66):eabm6074, 2022.",
|
| 547 |
+
"url": null
|
| 548 |
+
}
|
| 549 |
+
},
|
| 550 |
+
{
|
| 551 |
+
"50": {
|
| 552 |
+
"title": "A systematic survey of prompt engineering in large language models: Techniques and applications.",
|
| 553 |
+
"author": "Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, and Aman Chadha.",
|
| 554 |
+
"venue": "arXiv preprint arXiv:2402.07927, 2024.",
|
| 555 |
+
"url": null
|
| 556 |
+
}
|
| 557 |
+
},
|
| 558 |
+
{
|
| 559 |
+
"51": {
|
| 560 |
+
"title": "Retrieval-augmented generation for knowledge-intensive nlp tasks.",
|
| 561 |
+
"author": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rockt\u00e4schel, et al.",
|
| 562 |
+
"venue": "Advances in Neural Information Processing Systems, 33:9459\u20139474, 2020.",
|
| 563 |
+
"url": null
|
| 564 |
+
}
|
| 565 |
+
},
|
| 566 |
+
{
|
| 567 |
+
"52": {
|
| 568 |
+
"title": "LangChain.",
|
| 569 |
+
"author": "Harrison Chase.",
|
| 570 |
+
"venue": "https://github.com/hwchase17/langchain, October 2022.",
|
| 571 |
+
"url": null
|
| 572 |
+
}
|
| 573 |
+
},
|
| 574 |
+
{
|
| 575 |
+
"53": {
|
| 576 |
+
"title": "Merlin a cognitive architecture for service robots.",
|
| 577 |
+
"author": "Miguel \u00c1 Gonz\u00e1lez-Santamarta, Francisco J Rodr\u00edguez-Lera, Claudia \u00c1lvarez-Aparicio, \u00c1ngel M Guerrero-Higueras, and Camino Fern\u00e1ndez-Llamas.",
|
| 578 |
+
"venue": "Applied Sciences, 10(17):5989, 2020.",
|
| 579 |
+
"url": null
|
| 580 |
+
}
|
| 581 |
+
},
|
| 582 |
+
{
|
| 583 |
+
"54": {
|
| 584 |
+
"title": "Forward-chaining partial-order planning.",
|
| 585 |
+
"author": "Amanda Coles, Andrew Coles, Maria Fox, and Derek Long.",
|
| 586 |
+
"venue": "In ICAPS 2010 - Proceedings of the 20th International Conference on Automated Planning and Scheduling, pages 42\u201349, 01 2010.",
|
| 587 |
+
"url": null
|
| 588 |
+
}
|
| 589 |
+
},
|
| 590 |
+
{
|
| 591 |
+
"55": {
|
| 592 |
+
"title": "YASMIN: Yet another state machine.",
|
| 593 |
+
"author": "Miguel \u00c1. Gonz\u00e1lez-Santamarta, Francisco J. Rodr\u00edguez-Lera, Vicente Matell\u00e1n-Olivera, and Camino Fern\u00e1ndez-Llamas.",
|
| 594 |
+
"venue": "In Danilo Tardioli, Vicente Matell\u00e1n, Guillermo Heredia, Manuel F. Silva, and Lino Marques, editors, ROBOT2022: Fifth Iberian Robotics Conference, pages 528\u2013539, Cham, 2023. Springer International Publishing.",
|
| 595 |
+
"url": null
|
| 596 |
+
}
|
| 597 |
+
},
|
| 598 |
+
{
|
| 599 |
+
"56": {
|
| 600 |
+
"title": "https://www.trychroma.com/.",
|
| 601 |
+
"author": "the AI-native open-source embedding database \u2014 trychroma.com.",
|
| 602 |
+
"venue": "[Accessed 13-09-2023].",
|
| 603 |
+
"url": null
|
| 604 |
+
}
|
| 605 |
+
},
|
| 606 |
+
{
|
| 607 |
+
"57": {
|
| 608 |
+
"title": "Grammatical evolution.",
|
| 609 |
+
"author": "Michael O\u2019Neill, Conor Ryan, Michael O\u2019Neil, and Conor Ryan.",
|
| 610 |
+
"venue": "Springer, 2003.",
|
| 611 |
+
"url": null
|
| 612 |
+
}
|
| 613 |
+
},
|
| 614 |
+
{
|
| 615 |
+
"58": {
|
| 616 |
+
"title": "The marathon 2: A navigation system.",
|
| 617 |
+
"author": "Steve Macenski, Francisco Mart\u00edn, Ruffin White, and Jonatan Gin\u00e9s Clavero.",
|
| 618 |
+
"venue": "In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.",
|
| 619 |
+
"url": null
|
| 620 |
+
}
|
| 621 |
+
}
|
| 622 |
+
],
|
| 623 |
+
"url": "http://arxiv.org/html/2309.14945v2"
|
| 624 |
+
}
|
20240323/2310.05261v2.json
ADDED
|
@@ -0,0 +1,105 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Time-Varying Soft-Maximum Control Barrier Functions for Safety in an A Priori Unknown Environment",
|
| 3 |
+
"abstract": "This paper presents a time-varying soft-maximum composite control barrier function (CBF) that can be used to ensure safety in an a priori unknown environment, where local perception information regarding the safe set is periodically obtained.\nWe consider the scenario where the periodically obtained perception feedback can be used to construct a local CBF that models a local subset of the unknown safe set.\nThen, we use a novel smooth time-varying soft-maximum function to compose the most recently obtained local CBFs into a single CBF.\nThis composite CBF models an approximate union of the most recently obtained local subsets of the safe set.\nNotably, this composite CBF can have arbitrary relative degree .\nNext, this composite CBF is used as a th-order CBF constraint in a real-time optimization to determine a control that minimizes a quadratic cost while guaranteeing that the state stays in a time-varying subset of the unknown safe set.\nWe also present an application of the time-varying soft-maximum composite CBF method to a nonholonomic ground robot with nonnegligible inertia.\nIn this application, we present a simple approach to generate the local CBFs from the periodically obtained perception data.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Safe autonomous robotic navigation in an a priori unmapped environment has application in a variety of domains including search and rescue [1 ###reference_b1###], environmental monitoring [2 ###reference_b2###], and transportation [3 ###reference_b3###].\nA techniques for safe navigation include potential field methods[4 ###reference_b4###, 5 ###reference_b5###], collision cones [6 ###reference_b6###], reachable sets [7 ###reference_b7###], and barrier function approaches [8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###].\nControl barrier functions (CBFs) provide a set-theoretic method to obtain forward invariance (e.g., safety) with respect to a specified safe set [15 ###reference_b15###].\nCBFs can be implemented as constraints in real-time optimization-based control methods (e.g., quadratic programs) in order to guarantee forward invariance and thus, safety [14 ###reference_b14###].\nA variety of extensions have been developed recently.\nFor example, high-relative degree CBF methods (e.g., [16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###]); CBFs for discrete time [19 ###reference_b19###]; and CBFs with time variation or adaptation (e.g., [20 ###reference_b20###, 21 ###reference_b21###]).\nTraditionally, CBFs are assumed to be given offline, that is, constructed offline using a priori known information regarding the desired safe set.\nHowever, in situations where the environment is unknown or changing, online construction of valid CBFs could enable safe navigation.\nIn this scenario, the objective is to construct a valid CBF in real time based on the current state of the system (e.g., robot) as well as information gathered from the environment (e.g., perception information).\nFor example, [22 ###reference_b22###] uses a support vector machine classifier to build a barrier function using safe and unsafe samples from LiDAR measurements.\nAs another example, [23 ###reference_b23###] synthesizes a barrier function using a deep neural network trained with sensor data.\nHowever, when new sensor data is obtained, the barrier function model must be updated, which typically results in discontinuities that can be problematic for ensuring forward invariance and thus, safety.\nDiscontinuities that arise when new data is available is only one challenge in online construction of a valid CBF.\nAnother challenge is that it can be difficult to synthesize a single valid CBF that models a complex environment.\nThus, there is interest in composing a single valid CBF from multiple CBFs.\nApproaches to compose CBFs include [24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###].\nFor example, [25 ###reference_b25###] uses Boolean compositions, which result in nonsmooth barrier functions that use the minimum and maximum functions.\nThese nonsmooth barrier functions are required to be relative degree one.\nThis nonsmooth barrier function approach is extended in [26 ###reference_b26###] to allow not only for a nonsmooth composition (e.g., minimum and maximum) but also for time variation, specifically, periodic jumps in the barrier function.\nThis extension can be useful for addressing the discontinuities that arise when new information (e.g., perception data) is used to update the barrier function.\nHowever, this approach is not applicable for relative degree greater than one.\nThus, [24 
###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###] cannot be directly applied to higher-relative degree situations such as ground robots with nonnegligible inertia where the safe set is based on position (e.g., avoiding obstacles), or unmanned aerial vehicles with position-based safe sets.\nIn contrast to the nonsmooth composition methods, [27 ###reference_b27###, 28 ###reference_b28###] use smooth soft-minimum and soft-maximum functions for composing a single barrier function from multiple barrier functions.\nHowever, [27 ###reference_b27###, 28 ###reference_b28###] does not address online CBF construction, specifically, updating the CBF based on real-time measurements.\nThis paper presents several new contributions.\nFirst, we present a new time-varying soft-maximum composite CBF construction that can have arbitrary relative degree with respect to the system dynamics and allows for safety in an a priori unmapped environment.\nWe consider the scenario where perception feedback information is periodically obtained and used to construct a local CBF, that is, a CBF that models a local subset of the unknown safe set.\nThen, we use a soft-maximum function to compose a single CBF from the most recently obtained local CBFs.\nNotably, this composition uses not only the soft maximum but also time variation, specifically, a homotopy in time to smoothly move the oldest local CBF out of the soft maximum and newest local CBF into the soft maximum.\nIn fact, this homotopy is sufficiently smooth to ensure that the composite soft-maximum CBF is -times continuously differentiable in time and in the state.\nThus, this composite CBF models an approximate union of the most recently obtained local subsets of the safe set.\nNext, we use this composite CBF as a th-order CBF constraint in a real-time-optimization control that aims to find an instantaneously optimal control while guaranteeing that the state stays in a subset of the unknown safe set.\nWe also present an application of the time-varying soft-maximum composite CBF method to a nonholonomic ground robot with nonnegligible inertia.\nThis robot is equipped with sensing capability (e.g., LiDAR) that periodically detects points on objects (i.e., obstacles) that are near the robot.\nThe robot has the objective of safely navigating the unknown environment.\nWe present a simple approach to generate local CBFs from the perception data (e.g., LiDAR points).\nSpecifically, we construct an ellipsoidal barrier function for each of the detected point, where the semi-major axis of the ellipsoid stretches from the detected point to the range of the sensor and the -superlevel set models that area outside the ellipsoid.\nThus, the -sublevel set of each ellipsoid is a region that we want to avoid because it is not yet known to be safe.\nWe use a soft-minimum function to compose the ellipsoidal barrier function and thus, model a local set that is known to be a subset of the safe set from the perception data.\nWe then use the most recently obtained composite soft-minimum CBFs (each one models a local subset of the unknown safe set) in the time-varying soft-maximum barrier function method."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Notation",
|
| 15 |
+
"text": "Let be continuously differentiable. The Lie derivative of along the vector fields of is define as\nLet , and consider the functions and defined by\nwhich are the soft minimum and soft maximum, respectively.\nThe next result relates the soft minimum to the minimum and the soft maximum to the maximum.\nLet .\nThen,\nand\nFact 1 ###reference_t1### shows that as , converges to the minimum and converges to the maximum."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Problem Formulation",
|
| 21 |
+
"text": "Consider\nwhere is the state, is the input, is the initial condition, and and are locally Lipschitz continuous.\nLet , and define the safe set\nwhich is not assumed to be known.\nLet , and assume that for all , we obtain perception feedback at time in the form of a function such that:\n.\nIf , then .\nThere exist a positive integer such that is -times continuously differentiable, and for all , and .\nNote that the perception information can be obtained from a variety of CBF synthesis methods (e.g., [22 ###reference_b22###, 23 ###reference_b23###]).\nNext, consider the desired control .\nThe objective is to design a full-state feedback control such that for all , is minimized subject to the safety constraint that .\nAll statements in this paper that involve the subscript are for all ."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "IV Time-Varying CBF for Unknown Safe Set",
|
| 27 |
+
"text": "Let be -times continuously differentiable such that the following condition hold:\nFor all , .\nFor all , .\nFor all , .\nFor all , .\nThe following example provides one possible choice of that satisfies (C1) ###reference_i1###\u2013(C4) ###reference_i4###.\nLet , and consider , defined by\nFigure 1 ###reference_### is a plot of given by 5 ###reference_### for different value of .\n###figure_1### Let be a positive integer, and consider such that for all ,\nNote that is constructed from the most recently obtained perception feedback functions .\nThe next result demonstrates that is -times continuously differentiable. T\nhe proof is omitted for space considerations.\nThe function is -times continuously differentiable with respect to and .\nNext, we define the -superlevel set of .\nSpecifically, consider defined by\nThe next result shows that , which is constructed from real-time perception data, is a subset of the safe set\nFor all .\nFor , let be a extended class function that is -times continuously differentiable.\nNext, consider defined by , and for , consider defined by\nNext, we design a control that for all , seeks to minimize subject to the safety constraint that .\nFor all and all , the control is given by\nTo analyze the control Sections IV ###reference_###, 7 ###reference_###, 8 ###reference_###, and IV ###reference_###, for all define\nand\nThe next result is the main result on the control Sections IV ###reference_###, 7 ###reference_###, 8 ###reference_###, and IV ###reference_###, which is constructed from real-time perception data .\nThis result shows that the control guarantees safety and follows from [18 ###reference_b18###, Theorem 3].\nConsider (3 ###reference_###), where (A1) ###reference_i1###\u2013(A3) ###reference_i3### are satisfied, and consider given by Sections IV ###reference_###, 7 ###reference_###, 8 ###reference_###, and IV ###reference_###.\nAssume that for all and all , .\nLet .\nThen, for all , ."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "5",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Application to a Nonholonomic Ground Robot",
|
| 33 |
+
"text": "Consider the nonholonomic ground robot modeled by (3 ###reference_###), where\nand is the robot\u2019s position in an orthogonal coordinate frame, is the speed, and is the direction of the velocity vector.\nLet be the goal location, which is denoted by .\nThe desired control is\nwhere\nand , , and .\nThis desired control is designed using a process similar to [29 ###reference_b29###, pp. 30\u201331].\nNext, define , which extracts the position elements of the state.\nThe robot is equipped with sensing capability (e.g., LiDAR) that can detect points on objects that are inside an detection radius of the robot\u2019s position.\nAt time , the robot obtains the detection points , which are the polar coordinates of detected objects relative to the robot\u2019s position , where and .\nIf no object is detected at angle , then .\nNext, for each detected point, we define a function whose -level set is an ellipse that contains the detected point and extends from the detected point to the edge of the detection radius.\nSpecifically, for , consider defined by\nwhere\nand is the length of the semi-major axis, is the length of the semi-minor axis, and is a safety margin.\nNote that the -superlevel set of is the area outside the ellipse that contains the th detected point.\nFor the examples in this section, we let , , and .\nWe consider two cases for sensing capability: (1) a field of view, and (2) a limited (e.g., ) field of view."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "5.1",
|
| 37 |
+
"parent_section_id": "5",
|
| 38 |
+
"section_name": " Field of View",
|
| 39 |
+
"text": "We now use the ellipses , which are generated from the raw perception data, to construct the perception feedback function .\nWe use a soft minimum to combine the ellipses and construct the perception feedback function .\nSpecifically, we let\nFigure 2 ###reference_### shows a map of the a priori unknown environment that the ground robot must navigate.\nThe figure also shows the -level sets of , which are constructed from the raw perception data at s.\nNotice that each ellipse contains a detected point on the obstacles, and the detected point is on the interior of the ellipse because the safety margin which makes the length of the semi-major axis greater than the distance from the detected point to the boundary of the detection radius.\nWe implement the soft-maximum CBF control Sections IV ###reference_###, 7 ###reference_###, 8 ###reference_###, and IV ###reference_### is implemented with , , , , , , , and given by Example 1 ###reference_mple1### where and .\nFigure 3 ###reference_### shows the map and closed-loop trajectories for with 3 different goals locations , , and .\nIn all cases, the robot position converges to the goal location while satisfying safety constraints.\nFigures 4 ###reference_### and 5 ###reference_### provide time histories of relevant signals for the case where . Figure 4 ###reference_### shows and , which are always positive, indicating that the safety constraint is satisfied.\nFigure 5 ###reference_### shows , , , , , and .\nNote that the control Sections IV ###reference_###, 7 ###reference_###, 8 ###reference_###, and IV ###reference_### modifies the desired control signals to avoid collisions with detected obstacles.\n###figure_2### ###figure_3### ###figure_4### ###figure_5###"
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "5.2",
|
| 43 |
+
"parent_section_id": "5",
|
| 44 |
+
"section_name": "Limited Field of View",
|
| 45 |
+
"text": "Next, we consider the scenario where the robot\u2019s sensing capability has a limited field of view (i.e., ) as shown in Figure 6 ###reference_###.\nTo model the limited field of view, we sample points along the boundary of the field of view at time .\nFor each , we construct an ellipse denoted by using (12 ###reference_###) that contains the point on the boundary and goes to the maximum range .\nWe now use the ellipses , which are generated from the raw perception data, and ellipses , which generated from the field of view boundary, to construct the perception feedback function .\nWe use a soft minimum to combine the ellipses and construct the perception feedback function .\nSpecifically, we let\nFigure 7 ###reference_### shows a map of the a priori unknown environment that the ground robot must navigate.\nThe figure also shows the -level sets of and , which are constructed from the raw perception data and boundary of the field of view obtained at s.\nThe soft-maximum CBF control Sections IV ###reference_###, 7 ###reference_###, 8 ###reference_###, and IV ###reference_### is implemented with , , , , , , , , and given by Example 1 ###reference_mple1###, where and .\n###figure_6### Figure 8 ###reference_### shows the map and closed-loop trajectories for with 3 different goals locations , , and .\nIn all cases, the robot converges to the goal location while satisfying safety.\nFor the case where , Figures 9 ###reference_### and 10 ###reference_### provide time histories of relevant signals.\nFigure 9 ###reference_### shows that and are always positive, and Figure 10 ###reference_### shows , , , , , and .\n###figure_7### ###figure_8### ###figure_9### ###figure_10###"
|
| 46 |
+
}
|
| 47 |
+
],
|
| 48 |
+
"appendix": [],
|
| 49 |
+
"tables": {},
|
| 50 |
+
"image_paths": {
|
| 51 |
+
"1": {
|
| 52 |
+
"figure_path": "2310.05261v2_figure_1.png",
|
| 53 |
+
"caption": "Figure 1: Example of \u03b7\ud835\udf02\\etaitalic_\u03b7.",
|
| 54 |
+
"url": "http://arxiv.org/html/2310.05261v2/x1.png"
|
| 55 |
+
},
|
| 56 |
+
"2": {
|
| 57 |
+
"figure_path": "2310.05261v2_figure_2.png",
|
| 58 |
+
"caption": "Figure 2: Map of a priori unknown environment.",
|
| 59 |
+
"url": "http://arxiv.org/html/2310.05261v2/x2.png"
|
| 60 |
+
},
|
| 61 |
+
"3": {
|
| 62 |
+
"figure_path": "2310.05261v2_figure_3.png",
|
| 63 |
+
"caption": "Figure 3: Three closed-loop trajectories with 360\u2218superscript360360^{\\circ}360 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT LiDAR.",
|
| 64 |
+
"url": "http://arxiv.org/html/2310.05261v2/x3.png"
|
| 65 |
+
},
|
| 66 |
+
"4": {
|
| 67 |
+
"figure_path": "2310.05261v2_figure_4.png",
|
| 68 |
+
"caption": "Figure 4: h\u210ehitalic_h and \u03c81subscript\ud835\udf131\\psi_{1}italic_\u03c8 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT for qg=[\u2009135]Tsubscript\ud835\udc5egsuperscript135Tq_{\\rm g}=[\\,13\\quad 5\\,]^{\\rm T}italic_q start_POSTSUBSCRIPT roman_g end_POSTSUBSCRIPT = [ 13 5 ] start_POSTSUPERSCRIPT roman_T end_POSTSUPERSCRIPT .",
|
| 69 |
+
"url": "http://arxiv.org/html/2310.05261v2/x4.png"
|
| 70 |
+
},
|
| 71 |
+
"5": {
|
| 72 |
+
"figure_path": "2310.05261v2_figure_5.png",
|
| 73 |
+
"caption": "Figure 5: qxsubscript\ud835\udc5e\ud835\udc65q_{x}italic_q start_POSTSUBSCRIPT italic_x end_POSTSUBSCRIPT, qysubscript\ud835\udc5e\ud835\udc66q_{y}italic_q start_POSTSUBSCRIPT italic_y end_POSTSUBSCRIPT, v\ud835\udc63vitalic_v, \u03b8\ud835\udf03\\thetaitalic_\u03b8, udsubscript\ud835\udc62du_{\\rm d}italic_u start_POSTSUBSCRIPT roman_d end_POSTSUBSCRIPT, and u\ud835\udc62uitalic_u for qg=[\u2009135]Tsubscript\ud835\udc5egsuperscript135Tq_{\\rm g}=[\\,13\\quad 5\\,]^{\\rm T}italic_q start_POSTSUBSCRIPT roman_g end_POSTSUBSCRIPT = [ 13 5 ] start_POSTSUPERSCRIPT roman_T end_POSTSUPERSCRIPT.",
|
| 74 |
+
"url": "http://arxiv.org/html/2310.05261v2/x5.png"
|
| 75 |
+
},
|
| 76 |
+
"6": {
|
| 77 |
+
"figure_path": "2310.05261v2_figure_6.png",
|
| 78 |
+
"caption": "Figure 6: Boundry of the 100\u2218superscript100100^{\\circ}100 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT field of view at t=k\u2062Ts\ud835\udc61\ud835\udc58subscript\ud835\udc47st=kT_{\\rm s}italic_t = italic_k italic_T start_POSTSUBSCRIPT roman_s end_POSTSUBSCRIPT",
|
| 79 |
+
"url": "http://arxiv.org/html/2310.05261v2/x6.png"
|
| 80 |
+
},
|
| 81 |
+
"7": {
|
| 82 |
+
"figure_path": "2310.05261v2_figure_7.png",
|
| 83 |
+
"caption": "Figure 7: Map of a priori unknown environment.",
|
| 84 |
+
"url": "http://arxiv.org/html/2310.05261v2/x7.png"
|
| 85 |
+
},
|
| 86 |
+
"8": {
|
| 87 |
+
"figure_path": "2310.05261v2_figure_8.png",
|
| 88 |
+
"caption": "Figure 8: Three closed-loop trajectories with 100\u2218superscript100100^{\\circ}100 start_POSTSUPERSCRIPT \u2218 end_POSTSUPERSCRIPT LiDAR.",
|
| 89 |
+
"url": "http://arxiv.org/html/2310.05261v2/x8.png"
|
| 90 |
+
},
|
| 91 |
+
"9": {
|
| 92 |
+
"figure_path": "2310.05261v2_figure_9.png",
|
| 93 |
+
"caption": "Figure 9: h\u210ehitalic_h and \u03c81subscript\ud835\udf131\\psi_{1}italic_\u03c8 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT for qg=[\u2009135]Tsubscript\ud835\udc5egsuperscript135Tq_{\\rm g}=[\\,13\\quad 5\\,]^{\\rm T}italic_q start_POSTSUBSCRIPT roman_g end_POSTSUBSCRIPT = [ 13 5 ] start_POSTSUPERSCRIPT roman_T end_POSTSUPERSCRIPT .",
|
| 94 |
+
"url": "http://arxiv.org/html/2310.05261v2/x9.png"
|
| 95 |
+
},
|
| 96 |
+
"10": {
|
| 97 |
+
"figure_path": "2310.05261v2_figure_10.png",
|
| 98 |
+
"caption": "Figure 10: qxsubscript\ud835\udc5e\ud835\udc65q_{x}italic_q start_POSTSUBSCRIPT italic_x end_POSTSUBSCRIPT, qysubscript\ud835\udc5e\ud835\udc66q_{y}italic_q start_POSTSUBSCRIPT italic_y end_POSTSUBSCRIPT, v\ud835\udc63vitalic_v, \u03b8\ud835\udf03\\thetaitalic_\u03b8, udsubscript\ud835\udc62du_{\\rm d}italic_u start_POSTSUBSCRIPT roman_d end_POSTSUBSCRIPT, and u\ud835\udc62uitalic_u for qg=[\u2009135]Tsubscript\ud835\udc5egsuperscript135Tq_{\\rm g}=[\\,13\\quad 5\\,]^{\\rm T}italic_q start_POSTSUBSCRIPT roman_g end_POSTSUBSCRIPT = [ 13 5 ] start_POSTSUPERSCRIPT roman_T end_POSTSUPERSCRIPT.",
|
| 99 |
+
"url": "http://arxiv.org/html/2310.05261v2/x10.png"
|
| 100 |
+
}
|
| 101 |
+
},
|
| 102 |
+
"validation": true,
|
| 103 |
+
"references": [],
|
| 104 |
+
"url": "http://arxiv.org/html/2310.05261v2"
|
| 105 |
+
}
|
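Illustrative aside on the soft-minimum/soft-maximum composition described in the 2310.05261v2 record above. The exact expressions were lost in extraction, so this is a minimal sketch assuming the standard log-sum-exp forms, which satisfy the stated limits (the soft minimum approaches the minimum and the soft maximum approaches the maximum as the sharpness parameter rho grows). The function names, the ellipse parameterization, and the numbers below are hypothetical, not the authors' code.

import numpy as np

def soft_min(z, rho=10.0):
    # Smooth under-approximation of min(z); approaches min(z) as rho -> infinity.
    z = np.asarray(z, dtype=float)
    return -np.log(np.sum(np.exp(-rho * z))) / rho

def soft_max(z, rho=10.0):
    # Smooth over-approximation of max(z); approaches max(z) as rho -> infinity.
    z = np.asarray(z, dtype=float)
    return np.log(np.sum(np.exp(rho * z))) / rho

def ellipse_barrier(p, center, axes):
    # Hypothetical per-detection barrier: positive outside the ellipse, negative inside,
    # mirroring the record's idea that the ellipse interior is not yet known to be safe.
    p, center, axes = map(np.asarray, (p, center, axes))
    return float(np.sum(((p - center) / axes) ** 2) - 1.0)

def local_cbf(p, detections, rho=10.0):
    # One local CBF: soft-minimum composition of the per-detection ellipse barriers.
    return soft_min([ellipse_barrier(p, c, a) for c, a in detections], rho)

# Example: two hypothetical detections; a positive value means the query point lies
# outside both ellipses, i.e., in the locally known-safe region.
detections = [((1.0, 0.0), (0.5, 0.2)), ((0.0, 2.0), (0.8, 0.3))]
print(local_cbf((0.0, 0.0), detections))

# The record's composite CBF combines the most recent local CBFs with a soft maximum
# (the smooth time-varying homotopy weighting is omitted in this sketch).
print(soft_max([local_cbf((0.0, 0.0), detections), -0.5]))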
20240323/2310.08446v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2310.09725v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2310.15106v4.json
ADDED
|
@@ -0,0 +1,503 @@
|
| 1 |
+
{
|
| 2 |
+
"title": "Theoretical Analysis of the Radio Map Estimation ProblemThis research has been funded in part by the Research Council of Norway under IKTPLUSS grant 311994.",
|
| 3 |
+
"abstract": "Radio maps provide radio frequency metrics, such as the received signal\nstrength, at every location of a geographic area. These maps, which are\nestimated using a set of measurements collected at multiple positions, find a\nwide range of applications in wireless communications, including the prediction\nof coverage holes, network planning, resource allocation, and path planning for\nmobile robots. Although a vast number of estimators have been proposed, the\ntheoretical understanding of the radio map estimation (RME) problem has not been\naddressed. The present work aims at filling this gap along two directions.\nFirst, the complexity of the set of radio map functions is quantified by means\nof lower and upper bounds on their spatial variability, which offers valuable\ninsight into the required spatial distribution of measurements and the\nestimators that can be used. Second, the reconstruction error for power maps in\nfree space is upper bounded for three conventional spatial interpolators. The\nproximity coefficient, which is a decreasing function of the distance from the\ntransmitters to the mapped region, is proposed to quantify the complexity of the\nRME problem. Numerical experiments assess the tightness of the obtained bounds\nand the validity of the main takeaways in complex environments.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Radio maps, also known as radio environment maps, provide a radio frequency (RF) metric of interest across a geographical region [1 ###reference_b1###].\nFor example, in power maps, which constitute a prominent example of radio maps, the metric of interest is the power that a sensor would measure when placed at each location. An example of a power map constructed with real data is shown in Fig. 1 ###reference_###.\nOther examples of RF metrics include the received power spectral density (PSD), outage probability, and channel gain.\nRadio maps are of interest in a large number of applications such as cellular communications, device-to-device communications, network planning, frequency planning, robot path planning, dynamic spectrum access, aerial traffic management in unmanned aerial systems, fingerprinting localization, and so on; see e.g. [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 1 ###reference_b1###] and references therein.\nA recently popular application of power maps is to determine how the coverage of a cellular or broadcast network can be improved by deploying new base stations or relays, either terrestrial or aerial [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###].\nIn radio map estimation (RME), a radio map is constructed using measurements collected across the area of interest.\nMany estimators have been proposed in the literature, mostly based on some form of interpolation or regression. By far, power maps are the radio maps that garnered most interest.\nOne of the simplest kinds of estimators relies on kernel-based learning (see [9 ###reference_b9###] and references therein), which overcome the limitations of (the simpler) parametric estimators [1 ###reference_b1###, Sec. \u201cLinear Parametric RME\u201d].\nOther popular estimators are based on Kriging [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###],\nsparsity-based inference [13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###],\nmatrix completion [16 ###reference_b16###, 17 ###reference_b17###],\ndictionary learning [18 ###reference_b18###], and graphical models [19 ###reference_b19###].\nThe most recent trend capitalizes on deep neural networks; see e.g. [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###].\nNote that the aforementioned list of works is not exhaustive due to space limitations. For a more comprehensive list of references, see [1 ###reference_b1###].\nDespite the large\nvolume of research in this area, the vast majority of works adhere to a\ncommon profile: they propose an estimator and validate it with synthetic\ndata generated using a statistical propagation model or with ray-tracing software. A small number of works utilize also real data [24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 27 ###reference_b27###, 28 ###reference_b28###].\nHowever, no theoretical analysis on the fundamental aspects of the RME problem as well as on the performance of estimation algorithms has been carried out.\nIndeed, the most related work in this context is two-fold.\nOn the one hand, the estimation error of some schemes can be derived if the field of interest adheres to a certain model [12 ###reference_b12###, 20 ###reference_b20###]. 
However, these models are generic, not necessarily accurate for radio maps.\nOn the other hand, the wave theory of information (WTI) studied the problem of reconstructing the electromagnetic field across space and time using arrays of synchronized sensors [29 ###reference_b29###]. Nonetheless, this problem is fundamentally different from RME, where sensors are not typically synchronized, the metrics of interest involve temporal averages of the electromagnetic field, and\nthe targeted spatial resolution is much lower.\n###figure_1### This paper111A conference version of this paper was submitted to the IEEE Vehicular Technology Conference, Spring 2024. Relative to that paper, the present one considers also 2D maps, contains Theorem 1 ###reference_heorem1###, Corollary 1 ###reference_orollary1###, Corollary 2 ###reference_orollary2###, the analysis of the reconstruction using sinc interpolation, the proofs of all results, and further discussions and numerical experiments.\n takes a step to address this gap by means of a quantitative theoretical analysis of the RME problem.\nIn particular, the difficulty of the RME problem is first assessed by analyzing the spatial variability of power maps. An important finding in this context is that the spatial variations of power maps in free space are relatively slow. Most of their energy is concentrated at low spatial frequencies, which motivates estimators based on this property.\nSecond, the estimation performance of zeroth-order, first-order, and sinc interpolators is quantified in terms of , , and error metrics. Many of these bounds turn out to be proportional to a quantity referred to as the proximity coefficient, which is directly proportional to the transmitted power and inversely proportional to the cube of the distance from the transmitters to the mapped region. As a result, the analysis reveals that a larger spatial density of measurements is required when the sources are closer to the mapped region. Error analysis of the sinc interpolator yields bounds with a faster decay rate than zeroth- and first-order interpolators, but the latter are seen to be preferable in practice, where the number of samples is finite.\nFinally, although the aforementioned results assume free-space\npropagation, their generalization to more involved propagation\nphenomena is briefly addressed, both theoretically and by means of a\nnumerical experiment.\nThe rest of the paper is structured as follows. Sec. II ###reference_### formulates the RME problem and introduces useful notation. Sec. III ###reference_### analyzes the spatial variability of power maps. Sec. IV ###reference_### derives error bounds for the considered interpolators. Finally, Sec. V ###reference_### presents numerical experiments and\nSec. VI ###reference_### concludes the paper. The proofs can be found in the appendices.\n{nonextendedonly}\nExtended versions of the proofs can be found in [romero2023theoretical]. \nA note on the generalizability of the results here to non-free space environments is given in Appendix H ###reference_###.\nNotation:\n\nSymbol indicates equality by definition.\nIf is a set in a metric space, denotes its closure.\nIf is a set in a vector space, denotes the set of all linear combinations of finitely many elements of .\n is the set of natural numbers, the set of integers, and is the field of real numbers.\nBoldface lowercase (uppercase) letters denote column vectors (matrices).\nVertical concatenation is represented with a semicolon, e.g. 
.\nA function is represented by a letter, whereas the result of evaluating such a function at a point is denoted as ."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Estimation of Radio Maps",
|
| 15 |
+
"text": "This section introduces power maps and formulates the problem of\nestimating them. Subsequently, useful notation is presented for power maps\nin free space.\nLet comprise the Cartesian coordinates of all points in the geographic area of interest.\n A set222\n\n\nThere may be other sources in the region so long as the measurement process can separate out their aggregate contribution. This can be achieved e.g. by means of spreading codes or pilot sequences. This allows the construction of a wide variety of maps, including signal maps, interference maps, noise maps, and signal-to-interference-plus-noise-ratio (SINR) maps. This is useful e.g. to determine the coverage of a network; see [1 ###reference_b1###] for a list of applications.\n\n of sources (also referred to as transmitters) in a region\n produce an aggregate electric field\n at every point , where\n denotes time. The underscore notation will represent full location vectors in , whereas the notation will be used later when introducing restrictions.\nNeglecting for simplicity polarization\neffects and modeling as an ergodic wide-sense\nstationary random process over for all , the power\nof the signal received by a sensor with an isotropic antenna at\n does not depend on so it can be represented by a function\n. Function , which\ntherefore indicates how power spreads across space, is\na special case of a radio map termed power map and depends on the\ntransmitted signals, the transmitter locations, and the propagation environment.\nThe problem is to estimate a power map given a set of measurements in .\n Specifically, let\n denote the power measured at a set of locations\n .\nFor the ensuing analysis, it is not relevant whether sensors are static, which implies that each one measures at a single spatial location, or mobile, which means that they can measure at multiple spatial locations.333\n\nSince is modeled as an ergodic wide-sense\nstationary random process, the power is a constant, i.e., it does not depend on time. Thus, theoretically there is not a maximum allowed difference between the times at which the measurements must be collected.\nIn practice, is non-stationary and one can think of power as a function of time. Thus, one needs to specify a time scale under which the power does not significantly change. All the measurements must therefore be collected within this time scale.\nDue to the finite observation time spent by a sensor at \nto measure the received power, does not generally equal\n. Instead, certain measurement error\nmust be expected. This is oftentimes expressed as\n, where\n is the measurement error.\nThe power map estimation problem can be formulated as,\n given ,\n\nestimate the function or, equivalently, the values\n for all . The map\nestimate will be denoted as .\nIn this formulation, no information is given about the propagation environment, the positions of the sources, the transmitted power, the radiation pattern of the transmit antennas, and so on.\nThis is why most estimators in the\nliterature are based on interpolation algorithms rather than on electromagnetic propagation models. A detailed taxonomy of these estimators along with relevant references can be found in [1 ###reference_b1###]."
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A Power Maps in Free Space",
|
| 21 |
+
"text": "Since many of the results in this paper focus on free-space propagation, this section introduces useful notation for this class of maps. The case of general propagation effects will be addressed when discussing some general results and it will be\nthe focus of Sec. V-B ###reference_###, Appendix H ###reference_###, and future work.\nRecall that Friis\u2019 propagation law\nestablishes that the power that a terminal at receives from a transmitter\nat when propagation takes place in free space is given by\nwhere\n is the wavelength,\n is the transmitted power,\n is the antenna gain of the transmitter, and\n is the antenna gain of the receiver.\nSuppose for simplicity that both terminals use\nisotropic antennas, i.e. . Upon letting\n, expression (1 ###reference_###)\nreduces to\nObserve that, as per (2 ###reference_###), as , which is\nnot physically possible. The reason for this disagreement between\n(2 ###reference_###) and the physical reality is that (2 ###reference_###) is an\napproximation valid only in the far field, i.e., when is significantly larger than . Thus, it will be\nrequired throughout that , where is a constant sufficiently\nlarger than .\nIn the presence of multiple sources that transmit uncorrelated444This assumption excludes setups with coordinated multipoint or with multiantenna transmitters that use space-time coding or beamforming. Recall also\nFootnote 2 ###reference_te2###. signals, the individual contributions of each one to the total received power add up and, therefore, the set of all possible power maps is given by\nwhere\n denotes the number of sources, and\n\nand are respectively the coefficient\nand location of the -th source.\nDue to the minimum distance assumption introduced earlier,\n must be such that"
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B 1D and 2D Restrictions",
|
| 27 |
+
"text": "The maps in (3 ###reference_###) are functions\nof three spatial coordinates. However, most works in the literature consider restrictions of such maps to two or one spatial dimensions. This is because the case of two spatial dimensions is of interest when users are on the ground, whereas the case of one spatial dimension is relevant e.g. when one wishes to construct a map along a road or railway. The case of three spatial dimensions is still rare in the literature, but it has already been successfully applied to deploy aerial base stations and aerial relays [8 ###reference_b8###, 6 ###reference_b6###].\n2D Restriction. To consider the restriction of power maps to two spatial\ndimensions, focus without loss of generality (w.l.o.g.) on the values that takes on the\nhorizontal plane\n. To\nthis end, the domain where is defined must be a\nsubset of , i.e., for\nsome . Since each point in\n can be identified by its x and y coordinates, which are\ncollected in , the sought restriction of\n will be defined on .\nBefore presenting the expression for this restriction, some notation is introduced. Upon letting\n and\n, equation (2 ###reference_###) becomes\nwhere\n\n and\n respectively contain the\nhorizontal coordinates of the evaluation and source locations with respect\nto , whereas\n is the squared distance from the source\nlocation to .\n\nThus, although the points where will be evaluated are on\n, the source locations are not required to be on\n.\n###figure_2### With this notation, restricting the maps in\n(3 ###reference_###) to yields\nwhere\n\n is the set where the horizontal coordinates of\nthe sources are allowed to be (which results from the projection of onto\n)\nand contains the allowed values of\n for each vector of source horizontal coordinates , that is,\n.\n\nFig. 2 ###reference_### illustrates the main symbols used in the 2D restriction.\n1D Restriction. Since it is the most insightful case, most results in this paper focus on radio maps in a single spatial dimension, i.e., when the functions in (3 ###reference_###) are restricted to a line. For the same reason, this approach has also been adopted in the WTI [29 ###reference_b29###, Ch. 8]. RME on a line was considered e.g. in [30 ###reference_b30###].\nConsider w.l.o.g. the line\n and suppose that is a subset of . Thus, one can write\n for\nsome , where is the set of\nall vectors with one real entry. Vector notation is sometimes used for\nscalars to simplify the statement of some of the upcoming results.\nWhen and\n, (2 ###reference_###) becomes\nwhere\n\n and\n are the longitudinal\ncoordinates of the evaluation and source locations, whereas\n is the squared distance\nfrom the source location to the mapped line.\nThus, restricting the maps in (3 ###reference_###) to\n yields\nwhere, similarly to the 2D case,\n results from the orthogonal projection of onto\n\nand is the set of allowed values for when the x-coordinate of the source location is , that is,\n.\n\nFig. 3 ###reference_### illustrates the geometric meaning of the main symbols in (8 ###reference_###) while Fig. 4 ###reference_### shows an example of a power map in .\n###figure_3### ###figure_4###"
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "III Spatial Variability of Radio Maps",
|
| 33 |
+
"text": "Having formalized the classes of maps under study,\nthe rest of this section will analyze the variability of the functions\nin and . Specifically,\nSec. III-A ###reference_### and Sec. III-B ###reference_### will\nrespectively present high- and low-variability results. Subsequently,\nSec. IV ###reference_### builds upon these results to derive\nperformance bounds for three interpolation algorithms."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "III-A High-variability Result",
|
| 39 |
+
"text": "This section establishes that power maps constitute a\nconsiderably rich class of functions. It will follow that, under general\nconditions, power maps cannot be estimated exactly with a finite\nnumber of measurements, even in the absence of noise. In\nother words, a certain error must be expected. Importantly, these\nobservations are not confined to free-space propagation: they hold in\nthe presence of arbitrary propagation phenomena such as reflection,\nrefraction, and diffraction.\nThese conclusions follow from the next\nresult, which establishes that any continuous function can be\napproximated up to arbitrary accuracy as the difference between two\npower maps in free space.\nLet be a compact subset of\n, where is 1 or 2. Then, there exists\n such that the following condition holds:\nfor every continuous function \nand every ,\nThe set can be chosen to be any set that satisfies\nand\nthere exists\n.\n\nSee Appendix A ###reference_###.\nSeveral\nobservations are relevant.\nFirst, function need not be a power map \u2013 it is an arbitrary\ncontinuous function which can even take negative values.\nSecond, conditions (C1) ###reference_i1### and (C2) ###reference_i2### just\nrequire sufficient flexibility to find suitable source locations. One simple example where both conditions hold is to set so that the sources are allowed to be anywhere above a minimum positive height.\nThird, Theorem 1 ###reference_heorem1### establishes that\n is dense in the space of continuous\nfunctions defined on any given compact subset of . As a result,\n is clearly infinite dimensional. Thus, one\nshould not expect to be able to reconstruct a power map exactly with a\nfinite number of measurements, even if those measurements are noiseless.\nFinally, let denote the set of physically possible power maps,\nthat is, the set of power maps consistent with Maxwell\u2019s equations. Up\nto the simplifying assumptions in Friis\u2019 transmission equation, it\nholds that . Thus, the family of functions\n is at least as rich as and, consequently, the\nabove conclusions carry over to arbitrary power maps, not just\nfree-space maps.\nIn view of Theorem 1 ###reference_heorem1###, one may think that\nthere is no hope that power maps can be satisfactorily estimated when\nthe set of measurement locations is finite or even\ncountable. Fortunately, a closer look at Theorem 1 ###reference_heorem1###\nreveals that the variability of the functions in\n may not be as large as it may seem.\n\nFirst and foremost, Theorem 1 ###reference_heorem1### uses the\ndifference rather than a single map to\napproximate . This is because of the requirement on\nnon-negativity of the \u2019s in\n(3 ###reference_###). In other words, if\n were a subspace rather than just a convex\ncone, it would follow from Theorem 1 ###reference_heorem1### that it is possible\nto find two power maps that, given any arbitrary discrete set \nof measurement locations, (i) they take the same values at \nand (ii) they differ arbitrarily at any given point of . This would imply\nthat the error of any reconstruction algorithm that relies on measurements at , even in the absence of noise, would be unbounded. However, this is fortunately not the case and is extensively discussed in the next section.\nSecond, Theorem 1 ###reference_heorem1### does not\nconstrain the transmitted power or the number of sources in \nand . This means that some of these\nquantities may arbitrarily increase as . 
Thus, in the\npresence of a constraint on the transmitted power or number of\nsources, the variability of power maps may be much more limited than\nit may seem at first glance from Theorem 1 ###reference_heorem1###."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "III-B Low-variability Results",
|
| 45 |
+
"text": "This section provides upper bounds on the variability of\npower maps. To facilitate the intuitive understanding of the\nfundamental phenomena to be studied, the focus will be on the case\n, in which case any can\nbe written as\nwhere and are such that\n\n\nand ."
|
| 46 |
+
},
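A minimal numerical sketch of the 1D free-space model described in this entry (a power map formed as an incoherent sum of per-source Friis-type contributions). It is an illustration only; the wavelength, source positions, heights, and powers below are hypothetical values, not taken from the paper.

```python
import numpy as np

# Illustrative 1D free-space power map: incoherent sum of per-source
# Friis-type contributions. All numerical values below are hypothetical.
WAVELENGTH = 0.125                         # metres (roughly 2.4 GHz)
SRC_X = np.array([1.0, 5.0, 8.0])          # source x-coordinates
SRC_H = np.array([2.0, 4.0, 3.0])          # source distances from the mapped line
SRC_P = np.array([1.0, 3.0, 2.0])          # transmitted powers

def power_map(x):
    """Received power at positions x on the mapped line."""
    x = np.atleast_1d(np.asarray(x, dtype=float))[:, None]   # shape (N, 1)
    dist_sq = (x - SRC_X) ** 2 + SRC_H ** 2                  # squared source distances
    gain = (WAVELENGTH / (4.0 * np.pi)) ** 2 / dist_sq       # free-space channel gains
    return (SRC_P * gain).sum(axis=1)                        # incoherent sum over sources

if __name__ == "__main__":
    print(power_map(np.linspace(0.0, 10.0, 6)))
```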
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.2.1",
|
| 49 |
+
"parent_section_id": "3.2",
|
| 50 |
+
"section_name": "III-B1 Spatial Change Rate of Power Maps",
|
| 51 |
+
"text": "The first result upper bounds the first derivative of power maps.\nLet\n be open and let\n. Then,\n\nSee Appendix B ###reference_###.\nThe bound in Lemma 1 ###reference_emma1### is\ntight. It can be seen that equality is attained for a specific arrangement\nwhere all the sources lie on a plane that is perpendicular to .\nTo facilitate the interpretation of (11 ###reference_###), recall that can be expressed as , where is the transmitted power of the -th source. Thus, (11 ###reference_###) can be written as\nObserve that this\nrate decreases cubically with the distance from the sources to\n while it increases linearly with the transmitted power. Thus, the influence of the distance to the sources is much more significant: reducing by a factor of 2 has the same effect as increasing by a factor of 8.\nAlso, the fact that the derivative of in (10 ###reference_###) decreases to zero as becomes arbitrarily farther away from implies that the largest variability occurs in the vicinity of sources. By the above considerations, this variability is largest near the sources that lie close to . This suggests that radio map estimators will generally benefit from collecting a larger number of measurements in those parts of that are near the sources. Interestingly, this is fully consistent with the WTI, which predicts that a larger spatial density of sensors is required near the sources [29 ###reference_b29###, Secs. 8.5.2 and 8.6].\nIt is also interesting to express (12 ###reference_###) after\nnormalization by . In particular, consider the normalized distances and\n.\nThe radio map expressed in terms of becomes and its derivative satisfies\n. Thus, it follows from (12 ###reference_###) that\nAs expected from electromagnetic theory, this expression no longer depends on . Thus, the variability of a power map in the scale of the wavelength is just dependent on the distance of the sources to in units of . The RME problem is invariant to scaling both and all distances by the same factor. This means, for instance, that if one decreases and wishes to attain the same estimation performance, the distance between measurements needs to be decreased by the same factor. Conversely, for a given set of measurement locations, the estimation performance will be worse the shorter is.\nThe next result provides a different view on the variability of radio maps in . Unlike Lemma 1 ###reference_emma1###, which depends on the parameters of each source, the next result provides bounds on the values that a radio map can take at one point given the value that it takes at another point:\nLet\n. If , then\nwhere\nFurthermore, if and\n,\nthe bounds in (14 ###reference_###) are tight, which means that, given , , and , there exists that satisfies either bound in (14 ###reference_###) with equality.\n\nSee Appendix C ###reference_###.\nFig. 5 ###reference_### illustrates the bounds in (14 ###reference_###) for an example of power map when . The areas above the upper bound and below the lower bound are forbidden regions.\nWhen seen as functions of for fixed , the only parameter governing the bounds in (14 ###reference_###) is . 
Thus, when it comes to the relative change\n, the main factor determining the maximum variability of is the minimum distance between the sources and the mapped region.\nFurthermore, since the lower bound increases with whereas the upper bound decreases with ,\nthe maximum variability of is largest when is smallest.\n###figure_5### Recall that Theorem 1 ###reference_heorem1### established that the set of differences of power maps is dense in the space of continuous functions. It was mentioned that it would be highly problematic if this applied also to the set of power maps themselves. Fortunately, the following follows from Theorem 2 ###reference_heorem2###:\nis not dense in the space of positive continuous functions defined on and with the uniform metric.\nTrivial.\nIn other words, there are continuous functions for which a power map cannot be found that is arbitrarily close to that function.\nA different proof technique can be used to establish a similar result for the case where the map is defined on a circle.\nLet , where . Then is not dense in the space of positive continuous functions defined on .\nW.l.o.g. assume that .\nIt follows from (4 ###reference_###) that the series in [31 ###reference_b31###, eq. (0.8)] contains only a finite number of terms, which implies that the resulting series cannot diverge. Hence, it follows from [31 ###reference_b31###, Th. 2] that is not dense in the space of positive continuous functions defined on ."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.2.2",
|
| 55 |
+
"parent_section_id": "3.2",
|
| 56 |
+
"section_name": "III-B2 Spatial Bandwidth of Power Maps",
|
| 57 |
+
"text": "The rest of the section establishes that radio\nmaps in free space are approximately lowpass in terms of spatial frequency. This is not only relevant for purely theoretical reasons, but it is also important to motivate the usage of estimators that rely on this property. Such estimators would go along the lines of what is discussed in [29 ###reference_b29###, Ch. 8] about the spatial bandwidth of the electromagnetic field itself.\nConsider the Fourier transform of :\nwhere is the spatial frequency.\nThe following result555\nIt is considerably easier to establish that decreases at least as\n as just by relying on the identity . Theorem 3 ###reference_heorem3### is more involved to prove but it yields a much stronger result.\n characterizes the frequency content of :\nLet , , and . The following holds:\n\nSee Appendix D ###reference_###.\nExpression (17a ###reference_.1###) establishes that cannot be high-pass. More precisely, one can combine (17b ###reference_.2###) and (17c ###reference_.3###) to quantify the fraction of energy of at high frequencies:\nThis shows that the energy of is concentrated at low frequencies. Furthermore, this concentration becomes exponentially more pronounced as\n increases. Besides, by increasing\n, the concentration of the energy of at low frequencies rapidly grows. Finally, it is also worth pointing out that the WTI also uses the relation between the counterparts of and therein to quantify the complexity of the field through a notion of spatial bandwidth [29 ###reference_b29###, Eq. (8.75)]."
|
| 58 |
+
},
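As a rough numerical companion to the low-pass claim in this entry, the sketch below computes the spatial spectrum of a single-source free-space map and reports how much of its energy lies at low spatial frequencies. It is an illustration under assumed values (wavelength, source height, power), not a reproduction of the paper's expressions.

```python
import numpy as np

# Empirical check that a free-space power map is approximately low-pass in
# spatial frequency. Wavelength, source height, and power are assumed values.
wavelength, height, power = 0.125, 5.0, 1.0
x = np.linspace(-200.0, 200.0, 8192)
gamma = power * (wavelength / (4.0 * np.pi)) ** 2 / (x ** 2 + height ** 2)

spectrum = np.abs(np.fft.rfft(gamma)) ** 2
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])       # spatial frequency (cycles/metre)
frac_low = spectrum[freqs <= 0.5].sum() / spectrum.sum()
print(f"fraction of spectral energy below 0.5 cycles/m: {frac_low:.4f}")
```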
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4",
|
| 61 |
+
"parent_section_id": null,
|
| 62 |
+
"section_name": "IV Reconstruction Error Bounds",
|
| 63 |
+
"text": "This section analyzes the reconstruction performance of three simple radio map estimators.\nThe analysis for more sophisticated algorithms will be addressed by future publications. The obtained bounds are summarized in Table I ###reference_###.\nThe reconstruction error has multiple components. One is due to the specific variability of radio maps, which was quantified in Sec. III ###reference_###. Another is due to measurement noise and occurs in any interpolation problem. To focus on the first of these components, it will be assumed that for all .\nRecall that the RME problem formulation from Sec. II ###reference_### is to\nestimate given , where .\nFocusing on the 1D restriction introduced in Sec. II-B ###reference_###, one can rewrite this formulation as estimating given , where is the x-coordinate of the -th measurement location.\nHowever, to simplify some expressions, it is convenient to also allow a countable set of measurements. Thus, the problem will be reformulated as estimating given , where and is a (possibly infinite) countable set of indices. Obviously, the previous formulation is recovered by setting .\nBesides, it will be assumed w.l.o.g. that for all .\nThe performance metrics to be investigated are the conventional and norms used in Lebesgue spaces as well as the norm used in spaces of continuous bounded functions:\nMany of the bounds will be seen to be increasing\nfunctions of the following quantity, which will be referred to as the\nproximity coefficient:\nIn view of this weighted sum of the terms , one will conclude that a poor estimation performance is expected if relatively\nstrong sources are near the mapped region. This agrees with the findings in Sec. III ###reference_###."
|
| 64 |
+
},
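The three reconstruction error norms referenced in this entry (L1, L2, and sup norms) can be approximated on a dense evaluation grid as sketched below. The uniform grid and the Riemann-sum approximation are assumptions of this illustration.

```python
import numpy as np

def error_metrics(gamma_true, gamma_hat, x_grid):
    """Approximate the L1, L2, and sup-norm errors on a uniform evaluation grid."""
    diff = np.abs(np.asarray(gamma_true) - np.asarray(gamma_hat))
    dx = x_grid[1] - x_grid[0]                # uniform spacing assumed
    err_l1 = diff.sum() * dx                  # Riemann-sum approximation of the L1 norm
    err_l2 = np.sqrt((diff ** 2).sum() * dx)  # L2 norm
    err_sup = diff.max()                      # sup (uniform) norm
    return err_l1, err_l2, err_sup
```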
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.1",
|
| 67 |
+
"parent_section_id": "4",
|
| 68 |
+
"section_name": "IV-A Zeroth-Order Interpolation",
|
| 69 |
+
"text": "Suppose that and666Strictly speaking, does not contain , which violates the assumptions in Sec. II ###reference_###. This has clearly no impact, it just simplifies the exposition.\n .\nThe zeroth-order interpolator considered here is the nearest-neighbor estimator. For each , this estimator produces\nLet be given by (21 ###reference_###) and let . Then:\n\nSee Appendix E ###reference_###.\nFirst, observe that the error becomes 0 if . This is expected since is continuous.\nSecond, the bounds for all these metrics depend on the quantities\ndefining the map (wavelength, transmit power, and source position) only through the proximity coefficient , which therefore\ncondenses the impact of these magnitudes effectively.\nApplying Parseval\u2019s theorem to (17c ###reference_.3###) yields\nThe relative error can therefore be upper bounded as\nInterestingly, if and , then the relative error bound becomes\nThis again suggests that, the closer the sources are to , the smaller the sample spacing necessary for a target relative error. It is also remarkable that (25 ###reference_###) does not depend on the transmitted power in this simple scenario. In fact, (equivalently ) can be thought of as a factor in (24 ###reference_###) that weights the effect of each on the error."
|
| 70 |
+
},
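A minimal sketch of the zeroth-order (nearest-neighbor) interpolator described in this entry, assuming 1D measurement locations and noiseless samples; function and variable names are illustrative. Its output can be compared against the true map with the error metrics sketched earlier.

```python
import numpy as np

def nearest_neighbor_estimate(x_meas, y_meas, x_query):
    """Zeroth-order interpolation: copy the value of the closest measurement."""
    x_meas = np.asarray(x_meas, dtype=float)
    y_meas = np.asarray(y_meas, dtype=float)
    x_query = np.atleast_1d(np.asarray(x_query, dtype=float))
    nearest = np.abs(x_query[:, None] - x_meas[None, :]).argmin(axis=1)
    return y_meas[nearest]
```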
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4.2",
|
| 73 |
+
"parent_section_id": "4",
|
| 74 |
+
"section_name": "IV-B First-order Interpolation",
|
| 75 |
+
"text": "As in Sec. IV-A ###reference_###, let\n and .\nThe considered first-order interpolator is the linear interpolator returning a function on that takes the values\nwhere\n,\n,\nand is the only integer\nsuch that .\nThe estimator defined in (26 ###reference_###) satisfies:\n\nSee Appendix F ###reference_###.\nObserve that the bounds in Theorem 5 ###reference_heorem5### are the same as in Theorem 4 ###reference_heorem4### except for multiplicative factors. Therefore, similar observations to those in Sec. IV-A ###reference_### apply here.\nHowever, contrary to what was expected, the constants in\nTheorem 5 ###reference_heorem5### are in fact larger than the ones in Theorem 4 ###reference_heorem4###. This is because the latter bounds are tighter than the former since\nthe worst cases implicitly considered in the proof of Theorem 5 ###reference_heorem5### are more extreme. Notwithstanding, a more tedious derivation777The idea would be to enforce continuity and the derivative bound at the midpoint of each interval . Then, one can maximize the worst-case error with respect to the value that takes at this point. Unfortunately, the derivation becomes cumbersome due to the large number of cases that must be considered. is expected to result in upper bounds for first-order interpolation that are lower than those for zeroth-order interpolation."
|
| 76 |
+
},
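Similarly, the first-order (piecewise-linear) interpolator described in this entry can be sketched with numpy's built-in routine; again, a 1D noiseless setting is assumed and the names are illustrative.

```python
import numpy as np

def linear_estimate(x_meas, y_meas, x_query):
    """First-order interpolation: join consecutive measurements with straight lines."""
    order = np.argsort(x_meas)
    # np.interp performs piecewise-linear interpolation over sorted abscissae
    # (query points outside the measured range are clamped to the end values).
    return np.interp(x_query,
                     np.asarray(x_meas, dtype=float)[order],
                     np.asarray(y_meas, dtype=float)[order])
```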
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.3",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "IV-C Sinc Interpolation",
|
| 81 |
+
"text": "The sinc interpolator gives rise to an exact reconstruction of a bandlimited signal given a set of uniformly-spaced samples that satisfy the Nyquist criterion.\nUsing such a sinc interpolator for reconstructing is therefore motivated by\nTheorem 3 ###reference_heorem3###, which establishes that is approximately bandlimited. This interpolator also plays a central role in the WTI [29 ###reference_b29###].\nSuppose that is observed at a set of uniformly-spaced locations\n,\nwhere are the sampling instants corresponding to offset\n and is the spatial sampling interval.\nConsequently,\n and let .\nIn practice, may correspond to a scenario where a vehicle moves along and collects measurements at regular intervals.\nThe sinc interpolator is defined as\nwhere are the measurements. Consider the following:\nLet be a function with Fourier transform . If one lets\nthen\n\nSee Appendix G ###reference_###.\nThis theorem establishes that the average error across all offsets is proportional to the energy of outside . This is\ntherefore an aliasing error, since there is no physical way of spatially low-pass filtering before acquiring the measurements, as would be performed by an analog-to-digital converter (ADC) in the time domain. It can also be interpreted as the expected error when the offset is uniformly distributed in , which captures the fact that the measurement locations do not generally depend on the coordinate system or itself.\nObserve also that (30 ###reference_###) is an equality, i.e., it is not a bound, and that it applies to arbitrary power maps, not necessarily in free space. \nSubstituting (17b ###reference_.2###) with \nin (30 ###reference_###) yields the bound\nThis bound decreases much faster than the bounds in Theorem 4 ###reference_heorem4### and Theorem 5 ###reference_heorem5### as or upon setting . Furthermore, the error in (31 ###reference_###) is the total error in , whereas the bounds in Theorem 4 ###reference_heorem4### and Theorem 5 ###reference_heorem5### apply only to a bounded interval . In fact, the latter bounds diverge as one considers a longer support (just let with constant ). Thus, the performance guarantees for the sinc interpolator are much stronger.\nA more detailed comparison between these bounds is provided in Sec. V ###reference_###.\nThe case of a single transmitter is a relevant special case of the general problem formulated in Sec. II ###reference_###, where the number of sources is arbitrary. Some of the results in this paper can be readily specialized to this case by setting . This is the case of Lemma 1 ###reference_emma1### and the bounds in Sec. IV ###reference_###. Theorem 2 ###reference_heorem2### remains unaltered if one sets . In contrast, other results, such as Theorem 1 ###reference_heorem1###, Corollary 1 ###reference_orollary1###, and Corollary 2 ###reference_orollary2###, inherently require an arbitrary number of transmitters, so they do not apply to the case . Further considerations regarding the case can be found in [1 ###reference_b1###]."
|
| 82 |
+
},
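A sketch of the sinc interpolator of this entry, which reconstructs the map from uniformly spaced samples via the classical Shannon interpolation formula. The sketch assumes a finite number of samples, which, as discussed above, is precisely the regime where this interpolator may degrade in practice.

```python
import numpy as np

def sinc_estimate(y_meas, delta, offset, x_query):
    """Sinc (Shannon) reconstruction from samples taken at offset + n * delta."""
    y_meas = np.asarray(y_meas, dtype=float)
    t_n = offset + delta * np.arange(len(y_meas))
    x_query = np.atleast_1d(np.asarray(x_query, dtype=float))
    # np.sinc(u) = sin(pi u) / (pi u), i.e. the classical interpolation kernel.
    kernel = np.sinc((x_query[:, None] - t_n[None, :]) / delta)
    return kernel @ y_meas
```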
|
| 83 |
+
{
|
| 84 |
+
"section_id": "5",
|
| 85 |
+
"parent_section_id": null,
|
| 86 |
+
"section_name": "Numerical Experiments",
|
| 87 |
+
"text": "This section provides experiments that empirically corroborate\nthe theoretical findings of the paper."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "5.1",
|
| 91 |
+
"parent_section_id": "5",
|
| 92 |
+
"section_name": "Tightness of the Reconstruction Error Bounds",
|
| 93 |
+
"text": "This section verifies and assesses the tightness of the bounds\nin Sec. IV ###reference_###.\n\nTo this end, is\ngenerated by placing 3 transmitters at a\ndistance from .\nLet to express all lengths in terms of the wavelength and consider . The measurement locations are\n, where .\nUsing these\nmeasurements, each algorithm returns an interpolated function\n,\nwhich is evaluated at 1000 uniformly spaced points in the interval\n to approximate the error metrics in (IV ###reference_###). The values of these parameters are set to capture typical cases in cellular communications.\nFigs. 6 ###reference_### and 7 ###reference_### depict these metrics for zeroth- and first-order interpolation along with their upper bounds in (4 ###reference_###) and (5 ###reference_###), respectively. Observe that the decay rates of the bounds accurately match the decay rate\nof the corresponding error metrics.\nSecond, the error decreases more\nslowly than exponential, which would manifest itself as a straight\nline.\nAlso, the bounds are considerably tight: observe for example that the upper bounds\nfor the error are lower than the error. As anticipated, the bounds are tighter for zeroth-order interpolation than for first-order interpolation. However, the error for the latter is lower than for the former. Thus, first-order interpolation is preferable in terms of performance.\nThe third experiment investigates the error\nof sinc interpolation. Since the upper bound in\n(31 ###reference_###) pertains to the average of the\n error across sampling offsets \n(cf. (30 ###reference_###)), the error is approximated for 20\ndifferent offsets uniformly spaced in and then\naveraged. It is observed in Fig. 8 ###reference_### that, for a\nsufficiently small , both the error metrics and the bound\ndecrease at the same rate, which furthermore is seen to be\nexponential. Thus, the decrease rate of sinc interpolation is\nmuch faster than for zeroth and first-order interpolation. There is,\nhowever, an important caveat: as described in\nSec. IV-C ###reference_###, the bound in\n(31 ###reference_###) is applicable when the sampling\ngrid spans the entire real line, which will be abbreviated as\n. However, in practice and in a simulation, the number\n of sampling locations is finite and,\ntherefore, confined to an interval with finite length. Thus, for the\nupper bound to hold, it is necessary that the interpolation error with\nfinite is sufficiently close to the theoretical interpolation\nerror when . For this to hold, the energy of \nmust be sufficiently concentrated on the observed interval. Otherwise,\nthe omitted terms in (28 ###reference_###), i.e. those\ncorresponding to unobserved values of\n, have a significant\nimpact on the interval where the error metric is being\napproximated. Fig. 8 ###reference_### shows the sharp transition\nbetween both regimes when increases. For sufficiently small\n, function is concentrated in the observation\ninterval. Remarkably, the transition occurs at a rather small value of\n: just note the difference between the scale of\n and the scale of the at which the\ntransition occurs. Thus, although the sinc interpolator is very promising from a theoretical perspective, the finite length of the sampling interval may render it impractical.\n###figure_6### ###figure_7### ###figure_8### ###figure_9###"
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5.2",
|
| 97 |
+
"parent_section_id": "5",
|
| 98 |
+
"section_name": "Experiment with Ray-Tracing Data",
|
| 99 |
+
"text": "This section presents an experiment where a power map is generated using ray-tracing software. The goal is to verify the claim that power maps are more difficult to estimate the closer transmitters are to in a 2D scenario with a realistic channel model.\nTo this end, a collection of power maps was generated, each one for a different height of the transmitters.\n\nThe values of the map are obtained using a 3D model of downtown Ottawa on a rectangular grid with 1 m spacing constructed on a horizontal\nregion of size m and height 2 m.\nIn all maps, 5 transmitters are deployed. The x,y-coordinates of these transmitters are the same across maps. Their z-coordinates are equal within one map but they differ across maps. The transmitters use isotropic antennas and operate at 2.4 GHz with power .\nAt each Monte Carlo iteration, a smaller map is generated by drawing a sub-region of size m uniformly at random from the large map of the considered height. The locations of measurements collected by receivers with isotropic antennas are then drawn uniformly at random from the grid points that lie inside the sub-region but outside the buildings.\nThree simple estimators are used:\n\n(i) The -nearest neighbors estimator with [1 ###reference_b1###],\n(ii) simple Kriging with m [24 ###reference_b24###], and\n(iii) kernel ridge regression (KRR) with a Gaussian kernel of width 10 m and a regularization parameter of 0.001 [32 ###reference_b32###].\nThe performance metric is the normalized mean square error (NMSE) defined as\nwhere and are respectively the vectors collecting the values of the true and estimated power maps at the grid points that lie outside buildings. The expectations are, as indicated, over choices of the sub-region and the measurement locations.\nFig. 9 ###reference_### plots the NMSE of the three\naforementioned estimators vs. the height of the transmitters. As\nexpected, the NMSE decreases for all estimators as the transmitters\nbecome further away from the mapped region, which provides evidence in\nfavor of the aforementioned claim.\nAlthough it was analytically shown\nthat this always occurs in free-space and it was empirically observed\nthat it holds in Fig. 9 ###reference_###, it is important to note\nthat this is not necessarily the case in all situations. For example, in\na setup with , if the transmitter is placed above the center\nof a building with a horizontal metal rooftop, the transmitted signal\nwill not reach the ground when the transmitter is right on the rooftop.\nThis results in the map being identically 0 and, therefore, the\nestimation error for the considered estimators will be 0. However, if the transmitter is a certain height above the building, the rooftop will not block the signal everywhere. Thus, the error in this case will be strictly positive and, therefore, it will violate the claim. The conclusion is that this claim can be used as a guideline but it need not be accurate in all situations."
|
| 100 |
+
},
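The NMSE referenced in this entry (whose exact expression is not preserved in the text above) is commonly computed as the ratio of the mean squared reconstruction error to the mean energy of the true map; the sketch below follows that common convention over Monte Carlo realizations.

```python
import numpy as np

def nmse(true_maps, est_maps):
    """Monte Carlo NMSE: E[||true - est||^2] / E[||true||^2] over realizations."""
    num = np.mean([np.sum((t - e) ** 2) for t, e in zip(true_maps, est_maps)])
    den = np.mean([np.sum(t ** 2) for t in true_maps])
    return num / den
```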
|
| 101 |
+
{
|
| 102 |
+
"section_id": "6",
|
| 103 |
+
"parent_section_id": null,
|
| 104 |
+
"section_name": "VI Conclusions",
|
| 105 |
+
"text": "This paper studied the problem of reconstructing a power\nmap produced by a set of incoherent sources.\nThe variability of these maps was characterized via upper and lower bounds. Remarkably, power maps are seen to be spatially low-pass.\nThree function reconstruction error metrics were upper bounded for estimators based on zeroth-order, first-order, and sinc interpolation. A simple numerical experiment demonstrates that the bounds are tight and accurately predict the decrease rate with respect to the distance of the sources to the mapped region.\nThis justifies the introduction of the proximity coefficient, which is proportionally related to most of the reconstruction bounds and indicates that the difficulty of the RME problem increases with the transmitted power and decreases with the distance from the sources to the mapped region.\nThe analysis suggests that the sinc interpolator results in a much smaller reconstruction error than zeroth- and first-order interpolators. However, the finite length of the sampling interval in practice implies that the error of the sinc interpolator will be significantly large unless the sources are very close to the mapped region.\nAn experiment with ray-tracing data reveals that the difficulty of the RME problem also tends to increase with the proximity of the sources in non-free space propagation environments.\nBeing the first theoretical analysis in this context, this work suffers from several limitations. As a result, future work may address the estimation of radio maps in higher dimensions and account for noise, correlation among the transmitters, and propagation effects such as reflection, refraction, absorption, and diffraction. Bounds for more sophisticated estimators would also be of interest. It is thus the hope of the authors that this paper opens the door to a fertile research topic in this context."
|
| 106 |
+
}
|
| 107 |
+
],
|
| 108 |
+
"appendix": [
|
| 109 |
+
{
|
| 110 |
+
"section_id": "Appendix 1",
|
| 111 |
+
"parent_section_id": null,
|
| 112 |
+
"section_name": "Appendix A Proof of Theorem\u00a01",
|
| 113 |
+
"text": "If conditions (C1) ###reference_i1### and\n(C2) ###reference_i2### hold, it is clear that \ncontains the set\nFor any , is a reproducing-kernel Hilbert space\n(RKHS) with kernel\nBesides, is universal.\nIt was shown in [33 ###reference_b33###] that\nfunctions of the form are positive definite kernels if can be\nwritten as\nfor some finite Borel measure . Noting that, in\n(34 ###reference_###), and selecting\n such that\nfor all Borel sets , it is easy to see\nthat (35 ###reference_###) holds for in\n(34 ###reference_###). Therefore, in\n(34 ###reference_###) is a positive definite kernel.\nNoting that\nshows that is an RKHS with kernel .\nFinally, observe that the Borel measure associated with \nin (34 ###reference_###) is not concentrated at zero; cf.\n(36 ###reference_###). Thus, it follows from [34 ###reference_b34###, Theorem\n17] that is universal.\n\nDue to the universality of ,\nfor any and continuous , there exists such that .\nSince , it follows that there\nexists such that\n.\nBesides, since takes real values, it follows that\nwhere the second inequality follows from the triangle inequality and the properties of .\nThus, it remains only to write as the\ndifference between two functions in . To this end,\nnote that, since , it follows\n(cf. (37 ###reference_###)) that there exist\n and such that\n. If and are\nsuch that\nthen, it is easy to verify that and\n. Substituting this last\nexpression into (38a ###reference_.1###) yields\n(9 ###reference_###), which concludes the proof."
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"section_id": "Appendix 2",
|
| 117 |
+
"parent_section_id": null,
|
| 118 |
+
"section_name": "Appendix B Proof of Lemma 1",
|
| 119 |
+
"text": ""
|
| 120 |
+
},
|
| 121 |
+
{
|
| 122 |
+
"section_id": "Appendix 3",
|
| 123 |
+
"parent_section_id": null,
|
| 124 |
+
"section_name": "Appendix C Proof of Theorem\u00a02",
|
| 125 |
+
"text": "Without loss of generality, assume that and .\nProving the upper bound in (14 ###reference_###) can be equivalently phrased as finding an upper bound for the value that a power map can take at given that it takes the value at 0, where is arbitrary. Formally, one needs to find an upper bound for\nClearly, this supremum is upper bounded by\nwhere is any set such that . Due to (4 ###reference_###), this condition is satisfied if one enlarges so that and\n.\nFrom (10 ###reference_###), the in (42 ###reference_###) can be expressed as the solution to\nNow introduce auxiliary variables and rewrite the problem as\nBy optimizing first with respect to , it is easy to see that (C ###reference_###) is equivalent to\nwhere is the optimal value of the problem\nTo solve (C ###reference_###), optimize first with respect to to obtain\nSetting the derivative of the objective with respect to equal to zero yields\nThe solution is a maximum and the solution is a minimum. Substituting the former in (C ###reference_###) results in\nLetting yields\nSubstituting (50b ###reference_.2###) into\n(C ###reference_###) and using\n(45b ###reference_.2###) yields a problem that does not\ndepend on and\n. Its optimal value is\ntherefore the desired upper bound in (14 ###reference_###). The lower\nbound in (14 ###reference_###) can be obtained by following a similar\nreasoning. To see that the bounds are tight, it suffices to note\nthat (41 ###reference_###) equals (42 ###reference_###) if\n."
|
| 126 |
+
},
|
| 127 |
+
{
|
| 128 |
+
"section_id": "Appendix 4",
|
| 129 |
+
"parent_section_id": null,
|
| 130 |
+
"section_name": "Appendix D Proof of Theorem\u00a03",
|
| 131 |
+
"text": "Let us start by considering the following result, which provides an explicit form for the Fourier transform of :\nIt holds that:\n\nIt is easy to show that, for any , it follows that\nwhere denotes the Fourier transform.\nTherefore,\n\nIt follows from Lemma 3 ###reference_emma3### that\nwhich establishes (17a ###reference_.1###).\nThe high-pass energy of can be upper bounded as\nwhere and the matrix is such that its -th entry is . Since is symmetric, its eigenvalues are real. In particular, its largest eigenvalue is real. Since all entries of are positive, it follows necessarily that and\n{journalonly}\n.\n\nFurthermore, since all the entries of are positive, is a Perron-Frobenius eigenvalue and, therefore, satisfies [35 ###reference_b35###, eq. (2)] that . It follows that\nwhich establishes (17b ###reference_.2###).\nFinally, to upper bound the total energy, note that\nIt is straightforward to verify that, for any , it holds that\nHence,\nwhich proves (17c ###reference_.3###)."
|
| 132 |
+
},
|
| 133 |
+
{
|
| 134 |
+
"section_id": "Appendix 5",
|
| 135 |
+
"parent_section_id": null,
|
| 136 |
+
"section_name": "Appendix E Proof of Theorem\u00a04",
|
| 137 |
+
"text": "Let\nbe the upper bound on the derivative of provided by (11 ###reference_###).\nTo prove Theorem 4 ###reference_heorem4###, it is convenient to first establish the following result:\nIf , then\nwhere .\nGiven that is differentiable, it follows from the mean-value\ntheorem [36 ###reference_b36###, Th. 5.10] that, for any ,\nFrom , it follows that, for all\n,\nSetting and yields\n or, equivalently,\nOn the other hand, setting and\n results in\n, which can also be written as\n{intermed}\nor\nAdding\n yields\nCombining (65 ###reference_###) and\n(68a ###reference_.1###) yields (4 ###reference_###) for\n. The cases and follow from\ncontinuity.\n\nTo prove Theorem 4 ###reference_heorem4###, it is also convenient to first establish the following result:\nLet and let . It holds that\n\nTo prove (69 ###reference_###), consider the\nfollowing cases:\n(C1) : In this case, it clearly holds that , which implies that . On the other hand, it also holds that , which in turn implies that . Therefore, the left-hand side of (69 ###reference_###) becomes .\n(C2) : Since , it follows that . Furthermore, since , one has that . Hence, the left-hand side of (69 ###reference_###) becomes\n.\n(C3) : Since , it follows that . Since , it holds that . Thus, the left-hand side of (69 ###reference_###) becomes\n.\nNoting that (69 ###reference_###) has been proved for all values of concludes the proof.\n\nFor\n, the nearest neighbor\ninterpolator is given by\nwhere .\nIt follows from (4 ###reference_###) and that, for\n,\nNow applying Lemma 5 ###reference_emma5### yields\nSimilarly, for , one obtains\n{intermed}\nApplying Lemma 5 ###reference_emma5### results in\nThus, combining (72 ###reference_###) and (74a ###reference_.1###) produces the bound\nThe next step is to bound the error. Using the above expressions, it follows that\nCombining this bound for the intervals yields\nwhich, combined with (61 ###reference_###), proves (4 ###reference_.x1###).\nFor the error, one can write the following:\nCombining this bound for the intervals yields\nwhich proves (22a ###reference_.1###) after substitution of (61 ###reference_###).\nFinally, for the error, it follows that\nwhich, together with (61 ###reference_###), proves (22b ###reference_.2###).\nThe rest of the proof involves integrating (75a ###reference_.1###) to obtain the L and L errors and obtaining the suprema on each subinterval to obtain . It is omitted due to lack of space."
|
| 138 |
+
},
|
| 139 |
+
{
|
| 140 |
+
"section_id": "Appendix 6",
|
| 141 |
+
"parent_section_id": null,
|
| 142 |
+
"section_name": "Appendix F Proof of Theorem\u00a05",
|
| 143 |
+
"text": "Using Lemma 4 ###reference_emma4###, it is possible to prove the following:\nThe estimator defined in (26 ###reference_###) satisfies:\n\nIt follows from (4 ###reference_###) that, for ,\nwhere\nLetting , it is then straightforward to verify that\n and\n,\nwhere\nUsing the mean-value theorem to prove that , it can be readily shown that that both coefficients \nand\n\nin (84 ###reference_###) are non-negative .\n\n{journalonly}\nIt can be readily shown that that both coefficients \nand\n\nin (84 ###reference_###) are non-negative (just substitute and\n in (64 ###reference_###) to note that\n).\nSince implies that , it follows\nfrom (82 ###reference_###) that\nfor all .\nUsing (84 ###reference_###), it is also easy to verify that\n. As a\nconsequence, it is easy to see from (F ###reference_###) that\n. Thus, (F ###reference_###) can be\nalternatively expressed as\nObserve that if and only if\n. Hence, it\nsuffices to consider in . In this interval, it is easy to\nsee that .\nThe next step is to bound the error. Using the above expressions, it follows that\nThis is maximum when ,\nwhich yields\nCombining this bound for the intervals yields\nwhich proves (81a ###reference_.1###).\nWhen it comes to the error, one can write the following:\nwhere .\n{intermed}\nSetting the derivative equal to zero, one finds\nThe maximum subject to is attained when , which yields\nAdding this error over the intervals\n{intermed} yields\nwhich\n\nproves (81b ###reference_.2###).\nFinally, for the error, note that\nCombining this bound for all intervals\n{intermed}\nyields\nwhich\n\nproves (81c ###reference_.3###).\nThe rest of the proof involves integrating (86a ###reference_.1###) to obtain the L and L errors and computing the suprema on each subinterval to obtain . It is omitted due to lack of space.\n\nFinally, combining (6 ###reference_###) with\n(61 ###reference_###) completes the proof."
|
| 144 |
+
},
|
| 145 |
+
{
|
| 146 |
+
"section_id": "Appendix 7",
|
| 147 |
+
"parent_section_id": null,
|
| 148 |
+
"section_name": "Appendix G Proof of Theorem\u00a06",
|
| 149 |
+
"text": "Let\n\n and\nThen\nwhere\n\nFrom Parseval\u2019s\nrelation\nNote that\nIn the frequency domain:\nwhere\nTherefore,\nOn the other hand, it is straightforward to see that\nSubstituting (104b ###reference_4.2###) and (105 ###reference_5###) into (99 ###reference_###) yields\n\nFor the following result, consider the shifted signal and its reconstruction\nLet\nIt holds that\nwhere .\n\nLet\nFrom Lemma 7 ###reference_emma7###, it follows that\nwhere\nLetting , it follows\nthat\nNoting that yields\nFrom (112a ###reference_2.1###), it is then easy to see that\nOn the other hand, from (112b ###reference_2.2###), it follows that\nwhere \nis the discrete-time Fourier transform of\nApplying Parseval\u2019s identity to (117 ###reference_7###), it follows that\nSubstituting (115 ###reference_5###) and (119c ###reference_9.3###) into (113 ###reference_3###) concludes the proof.\n\nFinally, noting that shows that in (108 ###reference_8###) equals in (29 ###reference_###). Since both functions are periodic with period , it follows that in (109 ###reference_9###) equals in (30 ###reference_###), which completes the proof."
|
| 150 |
+
},
|
| 151 |
+
{
|
| 152 |
+
"section_id": "Appendix 8",
|
| 153 |
+
"parent_section_id": null,
|
| 154 |
+
"section_name": "Appendix H Arbitrary Path-loss Exponent",
|
| 155 |
+
"text": "The results in this paper\nwere obtained for free-space propagation, where the channel gain adheres\nto (1 ###reference_###). In more complex scenarios, the presence of\nobstacles introduces propagation phenomena such as reflection and\ndiffraction. As a result, the channel gain no longer depends on the transmitter and receiver\nlocations only through their distance. Still, one may be interested in predicting the channel gain\nof a given link based on its distance.\nTo this end, a common trick is\nto use (1 ###reference_###) after replacing the square with a constant\n termed path-loss exponent that is empirically\nadjusted. Although the resulting expression is not physically accurate, one may wonder whether the results in this paper can be extended to an arbirary .\nThe answer is yes for the most part. For example,\nexpressions (10 ###reference_###) and (11 ###reference_###) become\nas a result of this generalization.\nSetting equal to the right-hand side of (120b ###reference_0.2###) and following the same steps as in Appendices E ###reference_### and F ###reference_###, one can readily generalize the error bounds for zeroth- and first-order interpolation.\nOn the other hand, the variability bounds in Theorem 3 ###reference_heorem3### and the error bounds in Sec. IV-C ###reference_### are not easily generalizable to arbitrary path-loss exponents. This is because of the different nature of the proof techniques used therein.\nTo sum up, some of the results in this paper can be extended to arbitrary path-loss exponents, but this is not fully meaningful as the model with is not physically accurate. An analysis that accurately captures actual propagation phenomena will be the subject of future publications."
|
| 156 |
+
}
|
| 157 |
+
],
|
| 158 |
+
"tables": {
|
| 159 |
+
"1": {
|
| 160 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S4.T1.16.1.1\" style=\"font-size:90%;\">TABLE I</span>: </span><span class=\"ltx_text\" id=\"S4.T1.17.2\" style=\"font-size:90%;\">Upper bounds on the reconstruction error</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.14\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.14.15.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T1.14.15.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.14.15.1.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Zeroth-order interpolation</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.14.15.1.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">First-order interpolation</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.14.15.1.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">Sinc\ninterpolation</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T1.4.4.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.5.1\">Interpolator</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S4.T1.2.2.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.2.2.2.2.2\">\n<span class=\"ltx_tr\" id=\"S4.T1.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.1.1.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span></span>\n<span class=\"ltx_tr\" id=\"S4.T1.2.2.2.2.2.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T1.2.2.2.2.2.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span></span>\n</span></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.3.3.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S4.T1.3.3.3.1\"></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T1.4.4.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.4.1\"></span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_tt\" id=\"S4.T1.5.5.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S4.T1.5.5.1.1\"></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.6.6.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S4.T1.6.6.2.1\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.7.7.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S4.T1.7.7.3.1\"></span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_tt\" id=\"S4.T1.7.7.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.11.11\">\n<th class=\"ltx_td 
ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T1.8.8.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S4.T1.8.8.1.1\"></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.9.9.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S4.T1.9.9.2.1\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.10.10.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S4.T1.10.10.3.1\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.11.11.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S4.T1.11.11.4.1\"></span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.14.14\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_rr ltx_border_t\" id=\"S4.T1.12.12.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S4.T1.12.12.1.1\"></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.13.13.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S4.T1.13.13.2.1\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.14.14.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text\" id=\"S4.T1.14.14.3.1\"></span></td>\n<td class=\"ltx_td ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.14.14.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
|
| 161 |
+
"capture": "TABLE I: Upper bounds on the reconstruction error"
|
| 162 |
+
}
|
| 163 |
+
},
|
| 164 |
+
"image_paths": {
|
| 165 |
+
"1": {
|
| 166 |
+
"figure_path": "2310.15106v4_figure_1.png",
|
| 167 |
+
"caption": "Figure 1: Example of power map where a spatially dense set of measurements was collected using an unmanned aerial vehicle [24].",
|
| 168 |
+
"url": "http://arxiv.org/html/2310.15106v4/x1.png"
|
| 169 |
+
},
|
| 170 |
+
"2": {
|
| 171 |
+
"figure_path": "2310.15106v4_figure_2.png",
|
| 172 |
+
"caption": "Figure 2: Visual depiction of the setup for estimating a power map in two spatial dimensions. This is the most common setup in the literature.",
|
| 173 |
+
"url": "http://arxiv.org/html/2310.15106v4/x2.png"
|
| 174 |
+
},
|
| 175 |
+
"3": {
|
| 176 |
+
"figure_path": "2310.15106v4_figure_3.png",
|
| 177 |
+
"caption": "Figure 3: Visual depiction of the setup for estimating a power map in one spatial dimension. This is of interest e.g. when a map must be estimated along a road.",
|
| 178 |
+
"url": "http://arxiv.org/html/2310.15106v4/x3.png"
|
| 179 |
+
},
|
| 180 |
+
"4": {
|
| 181 |
+
"figure_path": "2310.15106v4_figure_4.png",
|
| 182 |
+
"caption": "Figure 4: The black curve shows an example of a power map in \ud835\udca2FS(1)superscriptsubscript\ud835\udca2FS1{{{{\\mathcal{G}}}_{\\text{FS}}^{(1)}}}caligraphic_G start_POSTSUBSCRIPT FS end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( 1 ) end_POSTSUPERSCRIPT where S=3\ud835\udc463{{S}}=3italic_S = 3, [r\u00b4x,1,r\u00b4x,2,r\u00b4x,3]=[1,5,8]subscript\u00b4\ud835\udc5f\ud835\udc651subscript\u00b4\ud835\udc5f\ud835\udc652subscript\u00b4\ud835\udc5f\ud835\udc653158[{{\\acute{r}}}_{x,1},{{\\acute{r}}}_{x,2},{{\\acute{r}}}_{x,3}]=[1,5,8][ over\u00b4 start_ARG italic_r end_ARG start_POSTSUBSCRIPT italic_x , 1 end_POSTSUBSCRIPT , over\u00b4 start_ARG italic_r end_ARG start_POSTSUBSCRIPT italic_x , 2 end_POSTSUBSCRIPT , over\u00b4 start_ARG italic_r end_ARG start_POSTSUBSCRIPT italic_x , 3 end_POSTSUBSCRIPT ] = [ 1 , 5 , 8 ], [\u03b21,\u03b22,\u03b23]=[1,3,2]subscript\ud835\udefd1subscript\ud835\udefd2subscript\ud835\udefd3132[{{\\beta}}_{1},{{\\beta}}_{2},{{\\beta}}_{3}]=[1,3,2][ italic_\u03b2 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_\u03b2 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_\u03b2 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ] = [ 1 , 3 , 2 ], and [\u03b11,\u03b12,\u03b13]=[1,3,3]subscript\ud835\udefc1subscript\ud835\udefc2subscript\ud835\udefc3133[{{\\alpha}}_{1},{{\\alpha}}_{2},{{\\alpha}}_{3}]=[1,3,3][ italic_\u03b1 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT , italic_\u03b1 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT ] = [ 1 , 3 , 3 ]. The blue lines correspond to the contribution of each source. The maximum of each one is at the corresponding value of r\u00b4x,ssubscript\u00b4\ud835\udc5f\ud835\udc65\ud835\udc60{{\\acute{r}}}_{x,{{s}}}over\u00b4 start_ARG italic_r end_ARG start_POSTSUBSCRIPT italic_x , italic_s end_POSTSUBSCRIPT.\nAlthough source s=1\ud835\udc601{{s}}=1italic_s = 1 has the lowest power, it is closer to \u2112\u2112{{\\mathcal{L}}}caligraphic_L than the other sources and this results in the largest contribution to \u03b3\ud835\udefe{{\\gamma}}italic_\u03b3 and its derivative \u03b3\u2032superscript\ud835\udefe\u2032{{\\gamma}}^{\\prime}italic_\u03b3 start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT.",
|
| 183 |
+
"url": "http://arxiv.org/html/2310.15106v4/x4.png"
|
| 184 |
+
},
|
| 185 |
+
"5": {
|
| 186 |
+
"figure_path": "2310.15106v4_figure_5.png",
|
| 187 |
+
"caption": "Figure 5: \n\nIllustration of the bounds on the variability of a radio map provided by Theorem 2. Given the value of \u03b3\ud835\udefe{{\\gamma}}italic_\u03b3 at a point rxsubscript\ud835\udc5f\ud835\udc65{{r}}_{x}italic_r start_POSTSUBSCRIPT italic_x end_POSTSUBSCRIPT (rx=0subscript\ud835\udc5f\ud835\udc650{{r}}_{x}=0italic_r start_POSTSUBSCRIPT italic_x end_POSTSUBSCRIPT = 0 in the figure), the values that \u03b3\ud835\udefe{{\\gamma}}italic_\u03b3 can take at any other point are restricted by (14). Thus, the areas above the upper bound and below the lower bound are forbidden regions.",
|
| 188 |
+
"url": "http://arxiv.org/html/2310.15106v4/x5.png"
|
| 189 |
+
},
|
| 190 |
+
"6": {
|
| 191 |
+
"figure_path": "2310.15106v4_figure_6.png",
|
| 192 |
+
"caption": "Figure 6: Error metrics along with their upper bounds (4)-(22b) for the zeroth-order interpolation estimator (21).",
|
| 193 |
+
"url": "http://arxiv.org/html/2310.15106v4/x6.png"
|
| 194 |
+
},
|
| 195 |
+
"7": {
|
| 196 |
+
"figure_path": "2310.15106v4_figure_7.png",
|
| 197 |
+
"caption": "Figure 7: Error metrics along with their upper bounds (27a)-(27c) for the first-order interpolation estimator (26).",
|
| 198 |
+
"url": "http://arxiv.org/html/2310.15106v4/x7.png"
|
| 199 |
+
},
|
| 200 |
+
"8": {
|
| 201 |
+
"figure_path": "2310.15106v4_figure_8.png",
|
| 202 |
+
"caption": "Figure 8: Error metrics along with the upper bound for the L2superscript\ud835\udc3f2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT error\n(31) for sinc interpolation.",
|
| 203 |
+
"url": "http://arxiv.org/html/2310.15106v4/x8.png"
|
| 204 |
+
},
|
| 205 |
+
"9": {
|
| 206 |
+
"figure_path": "2310.15106v4_figure_9.png",
|
| 207 |
+
"caption": "Figure 9: \n\nNorm mean square error vs. transmitter heights.",
|
| 208 |
+
"url": "http://arxiv.org/html/2310.15106v4/x9.png"
|
| 209 |
+
}
|
| 210 |
+
},
|
| 211 |
+
"validation": true,
|
| 212 |
+
"references": [
|
| 213 |
+
{
|
| 214 |
+
"1": {
|
| 215 |
+
"title": "\u201cRadio map estimation: A data-driven approach to spectrum\ncartography,\u201d",
|
| 216 |
+
"author": "D. Romero and S.-J. Kim,",
|
| 217 |
+
"venue": "IEEE Signal Process. Mag., vol. 39, no. 6, pp. 53\u201372, 2022.",
|
| 218 |
+
"url": null
|
| 219 |
+
}
|
| 220 |
+
},
|
| 221 |
+
{
|
| 222 |
+
"2": {
|
| 223 |
+
"title": "\u201cOptimal predictive resource allocation: Exploiting mobility\npatterns and radio maps,\u201d",
|
| 224 |
+
"author": "H. Abou-zeid, H. S. Hassanein, and S. Valentin,",
|
| 225 |
+
"venue": "in Global Commun. Conf., 2013, pp. 4877\u20134882.",
|
| 226 |
+
"url": null
|
| 227 |
+
}
|
| 228 |
+
},
|
| 229 |
+
{
|
| 230 |
+
"3": {
|
| 231 |
+
"title": "\u201cTowards practical REM-based radio resource management,\u201d",
|
| 232 |
+
"author": "S. Subramani, J. Riihij\u00e4rvi, B. Sayrac, L. Gavrilovska, M. Sooriyabandara,\nT. Farnham, and P. M\u00e4h\u00f6nen,",
|
| 233 |
+
"venue": "in Future Network & Mobile Summit, 2011, pp. 1\u20138.",
|
| 234 |
+
"url": null
|
| 235 |
+
}
|
| 236 |
+
},
|
| 237 |
+
{
|
| 238 |
+
"4": {
|
| 239 |
+
"title": "\u201cDesign of layered radio environment maps for RAN optimization in\nheterogeneous LTE systems,\u201d",
|
| 240 |
+
"author": "T. Cai, J. van de Beek, B. Sayrac, S. Grimoud, J. Nasreddine, J. Riihij\u00e4rvi,\nand P. M\u00e4h\u00f6nen,",
|
| 241 |
+
"venue": "in Int. Symp. Personal, Indoor Mobile Radio Commun., 2011, pp.\n172\u2013176.",
|
| 242 |
+
"url": null
|
| 243 |
+
}
|
| 244 |
+
},
|
| 245 |
+
{
|
| 246 |
+
"5": {
|
| 247 |
+
"title": "\u201cFemtocell downlink power control based on radio environment maps,\u201d",
|
| 248 |
+
"author": "A. Zalonis, N. Dimitriou, A. Polydoros, J. Nasreddine, and P. M\u00e4h\u00f6nen,",
|
| 249 |
+
"venue": "in Wireless Commun. Networking Conf., 2012, pp. 1224\u20131228.",
|
| 250 |
+
"url": null
|
| 251 |
+
}
|
| 252 |
+
},
|
| 253 |
+
{
|
| 254 |
+
"6": {
|
| 255 |
+
"title": "\u201cAerial base station placement leveraging radio tomographic maps,\u201d",
|
| 256 |
+
"author": "D. Romero, P. Q. Viet, and G. Leus,",
|
| 257 |
+
"venue": "in IEEE Int. Conf. Acoustics Speech Signal Process., Singapore,\n2022, IEEE, pp. 5358\u20135362.",
|
| 258 |
+
"url": null
|
| 259 |
+
}
|
| 260 |
+
},
|
| 261 |
+
{
|
| 262 |
+
"7": {
|
| 263 |
+
"title": "\u201cAerial base station placement: A tutorial introduction,\u201d",
|
| 264 |
+
"author": "P. Q. Viet and D. Romero,",
|
| 265 |
+
"venue": "IEEE Commun. Mag., vol. 60, no. 5, pp. 44\u201349, 2022.",
|
| 266 |
+
"url": null
|
| 267 |
+
}
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"8": {
|
| 271 |
+
"title": "\u201cProbabilistic roadmaps for aerial relay path planning,\u201d",
|
| 272 |
+
"author": "P. Q. Viet and D. Romero,",
|
| 273 |
+
"venue": "in IEEE Glob. Commun. Conf., 2023.",
|
| 274 |
+
"url": null
|
| 275 |
+
}
|
| 276 |
+
},
|
| 277 |
+
{
|
| 278 |
+
"9": {
|
| 279 |
+
"title": "\u201cSpectrum cartography using quantized observations,\u201d",
|
| 280 |
+
"author": "D. Romero, S.-J. Kim, R. L\u00f3pez-Valcarce, and G. B. Giannakis,",
|
| 281 |
+
"venue": "in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process.,\nBrisbane, Australia, Apr. 2015, pp. 3252 \u2013 3256.",
|
| 282 |
+
"url": null
|
| 283 |
+
}
|
| 284 |
+
},
|
| 285 |
+
{
|
| 286 |
+
"10": {
|
| 287 |
+
"title": "\u201cInformed spectrum usage in cognitive radio networks: Interference\ncartography,\u201d",
|
| 288 |
+
"author": "A. Alaya-Feki, S. B. Jemaa, B. Sayrac, P. Houze, and E. Moulines,",
|
| 289 |
+
"venue": "in Proc. IEEE Int. Symp. Personal, Indoor Mobile Radio Commun.,\nCannes, France, Sep. 2008, pp. 1\u20135.",
|
| 290 |
+
"url": null
|
| 291 |
+
}
|
| 292 |
+
},
|
| 293 |
+
{
|
| 294 |
+
"11": {
|
| 295 |
+
"title": "\u201cPredictive spectrum occupancy probability-based spatio-temporal\ndynamic channel allocation map for future cognitive wireless networks,\u201d",
|
| 296 |
+
"author": "A. Agarwal and R. Gangopadhyay,",
|
| 297 |
+
"venue": "Trans. Emerging Telecommun. Technol., vol. 29, no. 8, pp.\ne3442, 2018.",
|
| 298 |
+
"url": null
|
| 299 |
+
}
|
| 300 |
+
},
|
| 301 |
+
{
|
| 302 |
+
"12": {
|
| 303 |
+
"title": "\u201cSpectrum surveying: Active radio map estimation with autonomous\nUAVs,\u201d",
|
| 304 |
+
"author": "R. Shrestha, D. Romero, and S. P. Chepuri,",
|
| 305 |
+
"venue": "IEEE Trans. Wireless Commun., vol. 22, no. 1, pp. 627\u2013641,\n2022.",
|
| 306 |
+
"url": null
|
| 307 |
+
}
|
| 308 |
+
},
|
| 309 |
+
{
|
| 310 |
+
"13": {
|
| 311 |
+
"title": "\u201cDistributed spectrum sensing for cognitive radio networks by\nexploiting sparsity,\u201d",
|
| 312 |
+
"author": "J.-A. Bazerque and G. B. Giannakis,",
|
| 313 |
+
"venue": "IEEE Trans. Signal Process., vol. 58, no. 3, pp. 1847\u20131862,\nMar. 2010.",
|
| 314 |
+
"url": null
|
| 315 |
+
}
|
| 316 |
+
},
|
| 317 |
+
{
|
| 318 |
+
"14": {
|
| 319 |
+
"title": "\u201cGroup-lasso on splines for spectrum cartography,\u201d",
|
| 320 |
+
"author": "J.-A. Bazerque, G. Mateos, and G. B. Giannakis,",
|
| 321 |
+
"venue": "IEEE Trans. Signal Process., vol. 59, no. 10, pp. 4648\u20134663,\nOct. 2011.",
|
| 322 |
+
"url": null
|
| 323 |
+
}
|
| 324 |
+
},
|
| 325 |
+
{
|
| 326 |
+
"15": {
|
| 327 |
+
"title": "\u201cImproved performance of spectrum cartography based on compressive\nsensing in cognitive radio networks,\u201d",
|
| 328 |
+
"author": "B. A. Jayawickrama, E. Dutkiewicz, I. Oppermann, G. Fang, and J. Ding,",
|
| 329 |
+
"venue": "in Proc. IEEE Int. Commun. Conf., Budapest, Hungary, Jun. 2013,\npp. 5657\u20135661.",
|
| 330 |
+
"url": null
|
| 331 |
+
}
|
| 332 |
+
},
|
| 333 |
+
{
|
| 334 |
+
"16": {
|
| 335 |
+
"title": "\u201cAirMAP: Scalable spectrum occupancy recovery using local\nlow-rank matrix approximation,\u201d",
|
| 336 |
+
"author": "B. Khalfi, B. Hamdaoui, and M. Guizani,",
|
| 337 |
+
"venue": "in IEEE Glob. Commun. Conf., Abu Dhabi, UAE, Dec. 2018.",
|
| 338 |
+
"url": null
|
| 339 |
+
}
|
| 340 |
+
},
|
| 341 |
+
{
|
| 342 |
+
"17": {
|
| 343 |
+
"title": "\u201cTensor completion for radio map reconstruction using low rank and\nsmoothness,\u201d",
|
| 344 |
+
"author": "D. Sch\u00e4ufele, R. L. G. Cavalcante, and S. Mtanczak,",
|
| 345 |
+
"venue": "in IEEE Int. Workshop Signal Process. Advances Wireless\nCommun., Cannes, France, Jul. 2019.",
|
| 346 |
+
"url": null
|
| 347 |
+
}
|
| 348 |
+
},
|
| 349 |
+
{
|
| 350 |
+
"18": {
|
| 351 |
+
"title": "\u201cCognitive radio spectrum prediction using dictionary learning,\u201d",
|
| 352 |
+
"author": "S.-J. Kim and G. B. Giannakis,",
|
| 353 |
+
"venue": "in Proc. IEEE Global Commun. Conf., Atlanta, GA, Dec. 2013, pp.\n3206 \u2013 3211.",
|
| 354 |
+
"url": null
|
| 355 |
+
}
|
| 356 |
+
},
|
| 357 |
+
{
|
| 358 |
+
"19": {
|
| 359 |
+
"title": "\u201cRadio maps for beam alignment in mmwave communications with\nlocation uncertainty,\u201d",
|
| 360 |
+
"author": "T. N. Ha, D. Romero, and R. L\u00f3pez-Valcarce,",
|
| 361 |
+
"venue": "in IEEE Vehicular Tech. Conf. Workshop, Spring, Singapore,\n2024.",
|
| 362 |
+
"url": null
|
| 363 |
+
}
|
| 364 |
+
},
|
| 365 |
+
{
|
| 366 |
+
"20": {
|
| 367 |
+
"title": "\u201cSpatial signal strength prediction using 3D maps and deep\nlearning,\u201d",
|
| 368 |
+
"author": "E. Krijestorac, S. Hanna, and D. Cabric,",
|
| 369 |
+
"venue": "in Proc. IEEE Int Conf. Commun. IEEE, 2021, pp. 1\u20136.",
|
| 370 |
+
"url": null
|
| 371 |
+
}
|
| 372 |
+
},
|
| 373 |
+
{
|
| 374 |
+
"21": {
|
| 375 |
+
"title": "\u201cRadioUNet: Fast radio map estimation with convolutional neural\nnetworks,\u201d",
|
| 376 |
+
"author": "R. Levie, \u00c7. Yapar, G. Kutyniok, and G. Caire,",
|
| 377 |
+
"venue": "IEEE Trans. Wireless Commun., vol. 20, no. 6, pp. 4001\u20134015,\n2021.",
|
| 378 |
+
"url": null
|
| 379 |
+
}
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"22": {
|
| 383 |
+
"title": "\u201cA power spectrum maps estimation algorithm based on generative\nadversarial networks for underlay cognitive radio networks,\u201d",
|
| 384 |
+
"author": "X. Han, L. Xue, F. Shao, and Y. Xu,",
|
| 385 |
+
"venue": "Sensors, vol. 20, no. 1, pp. 311, Jan. 2020.",
|
| 386 |
+
"url": null
|
| 387 |
+
}
|
| 388 |
+
},
|
| 389 |
+
{
|
| 390 |
+
"23": {
|
| 391 |
+
"title": "\u201cDeep completion autoencoders for radio map estimation,\u201d",
|
| 392 |
+
"author": "Y. Teganya and D. Romero,",
|
| 393 |
+
"venue": "IEEE Trans. Wireless Commun., vol. 21, no. 3, pp. 1710\u20131724,\n2021.",
|
| 394 |
+
"url": null
|
| 395 |
+
}
|
| 396 |
+
},
|
| 397 |
+
{
|
| 398 |
+
"24": {
|
| 399 |
+
"title": "\u201cRadio map estimation in the real-world: Empirical validation and\nanalysis,\u201d",
|
| 400 |
+
"author": "R. Shrestha, T. N. Ha, P. Q. Viet, and D. Romero,",
|
| 401 |
+
"venue": "in 2023 IEEE Conf. on Antenna Measurements and Appl. (CAMA),\nGenoa, Italy, 2023, pp. 169\u2013174.",
|
| 402 |
+
"url": null
|
| 403 |
+
}
|
| 404 |
+
},
|
| 405 |
+
{
|
| 406 |
+
"25": {
|
| 407 |
+
"title": "\u201cA hidden environment model for constructing indoor radio maps,\u201d",
|
| 408 |
+
"author": "Z. Xiang, H. Zhang, J. Huang, S. Song, and K.C. Almeroth,",
|
| 409 |
+
"venue": "in IEEE Int. Symp. World Wireless Mobile Multimedia Net., 2005,\npp. 395\u2013400.",
|
| 410 |
+
"url": null
|
| 411 |
+
}
|
| 412 |
+
},
|
| 413 |
+
{
|
| 414 |
+
"26": {
|
| 415 |
+
"title": "\u201cEfficient radio map construction based on low-rank approximation\nfor indoor positioning,\u201d",
|
| 416 |
+
"author": "Y. Hu, W. Zhou, Z. Wen, Y. Sun, and B. Yin,",
|
| 417 |
+
"venue": "Math. Probl. Eng., vol. 2013, 2013.",
|
| 418 |
+
"url": null
|
| 419 |
+
}
|
| 420 |
+
},
|
| 421 |
+
{
|
| 422 |
+
"27": {
|
| 423 |
+
"title": "\u201cUpdating wireless signal map with Bayesian compressive sensing,\u201d",
|
| 424 |
+
"author": "B. Yang, S. He, and S.-H. G. Chan,",
|
| 425 |
+
"venue": "in Proc. ACM Int. Conf. Mod., Anal. and Simu. Wireless and\nMobile Sys., New York, NY, USA, 2016, MSWiM \u201916, pp. 310\u2013317, Association\nfor Computing Machinery.",
|
| 426 |
+
"url": null
|
| 427 |
+
}
|
| 428 |
+
},
|
| 429 |
+
{
|
| 430 |
+
"28": {
|
| 431 |
+
"title": "\u201cRecNet: A convolutional network for efficient radiomap\nreconstruction,\u201d",
|
| 432 |
+
"author": "Q. Niu, Y. Nie, S. He, N. Liu, and X. Luo,",
|
| 433 |
+
"venue": "in IEEE Int. Conf. Commun., 2018, pp. 1\u20137.",
|
| 434 |
+
"url": null
|
| 435 |
+
}
|
| 436 |
+
},
|
| 437 |
+
{
|
| 438 |
+
"29": {
|
| 439 |
+
"title": "Wave Theory of Information,",
|
| 440 |
+
"author": "M. Franceschetti,",
|
| 441 |
+
"venue": "Cambridge University Press, 2018.",
|
| 442 |
+
"url": null
|
| 443 |
+
}
|
| 444 |
+
},
|
| 445 |
+
{
|
| 446 |
+
"30": {
|
| 447 |
+
"title": "\u201cStochastic semiparametric regression for spectrum cartography,\u201d",
|
| 448 |
+
"author": "D. Romero, S.-J. Kim, and G. B. Giannakis,",
|
| 449 |
+
"venue": "in Proc. IEEE Int. Workshop Comput. Advan. Multi-Sensor Adapt.\nProcess., Cancun, Mexico, Dec. 2015, pp. 513\u2013516.",
|
| 450 |
+
"url": null
|
| 451 |
+
}
|
| 452 |
+
},
|
| 453 |
+
{
|
| 454 |
+
"31": {
|
| 455 |
+
"title": "\u201cBases for positive continuous functions,\u201d",
|
| 456 |
+
"author": "W. K. Hayman and T. J. Lyons,",
|
| 457 |
+
"venue": "J. London Math. Society, vol. 2, no. 2, pp. 292\u2013308, 1990.",
|
| 458 |
+
"url": null
|
| 459 |
+
}
|
| 460 |
+
},
|
| 461 |
+
{
|
| 462 |
+
"32": {
|
| 463 |
+
"title": "\u201cLearning power spectrum maps from quantized power measurements,\u201d",
|
| 464 |
+
"author": "D. Romero, S-J. Kim, G. B. Giannakis, and R. L\u00f3pez-Valcarce,",
|
| 465 |
+
"venue": "IEEE Trans. Signal Process., vol. 65, no. 10, pp. 2547\u20132560,\nMay 2017.",
|
| 466 |
+
"url": null
|
| 467 |
+
}
|
| 468 |
+
},
|
| 469 |
+
{
|
| 470 |
+
"33": {
|
| 471 |
+
"title": "\u201cMetric spaces and completely monotone functions,\u201d",
|
| 472 |
+
"author": "I. J. Schoenberg,",
|
| 473 |
+
"venue": "Annals Mathematics, pp. 811\u2013841, 1938.",
|
| 474 |
+
"url": null
|
| 475 |
+
}
|
| 476 |
+
},
|
| 477 |
+
{
|
| 478 |
+
"34": {
|
| 479 |
+
"title": "\u201cUniversal kernels,\u201d",
|
| 480 |
+
"author": "C. A. Micchelli, Y. Xu, and H. Zhang,",
|
| 481 |
+
"venue": "J. Mach. Learn. Res., vol. 7, pp. 2651\u20132667, Dec. 2006.",
|
| 482 |
+
"url": null
|
| 483 |
+
}
|
| 484 |
+
},
|
| 485 |
+
{
|
| 486 |
+
"35": {
|
| 487 |
+
"title": "\u201cThe Perron-Frobenius theorem: some of its applications,\u201d",
|
| 488 |
+
"author": "S.U. Pillai, T. Suel, and S. Cha,",
|
| 489 |
+
"venue": "IEEE Signal Processing Mag., vol. 22, no. 2, pp. 62\u201375, 2005.",
|
| 490 |
+
"url": null
|
| 491 |
+
}
|
| 492 |
+
},
|
| 493 |
+
{
|
| 494 |
+
"36": {
|
| 495 |
+
"title": "Principles of mathematical analysis,",
|
| 496 |
+
"author": "W. Rudin,",
|
| 497 |
+
"venue": "1953.",
|
| 498 |
+
"url": null
|
| 499 |
+
}
|
| 500 |
+
}
|
| 501 |
+
],
|
| 502 |
+
"url": "http://arxiv.org/html/2310.15106v4"
|
| 503 |
+
}
|
20240323/2310.18847v2.json
ADDED
|
@@ -0,0 +1,356 @@
| 1 |
+
{
|
| 2 |
+
"title": "Bird\u2019s Eye View Based Pretrained World model for Visual Navigation",
|
| 3 |
+
"abstract": "Sim2Real transfer has gained popularity because it helps transfer from inexpensive simulators to real world. This paper presents a novel system that fuses components in a traditional World Model into a robust system, trained entirely within a simulator, that Zero-Shot transfers to the real world. To facilitate transfer, we use an intermediary representation that is based on Bird\u2019s Eye View (BEV) images. Thus, our robot learns to navigate in a simulator by first learning to translate from complex First-Person View (FPV) based RGB images to BEV representations, then learning to navigate using those representations. Later, when tested in the real world, the robot uses the perception model that translates FPV-based RGB images to embeddings that were learned by the FPV to BEV translator and that can be used by the downstream policy.\nThe incorporation of state-checking modules using Anchor images and Mixture Density LSTM not only interpolates uncertain and missing observations but also enhances the robustness of the model in the real-world. We trained the model using data from a Differential drive robot in the CARLA simulator. Our methodology\u2019s effectiveness is shown through the deployment of trained models onto a real-world Differential drive robot. Lastly we release a comprehensive codebase, dataset and models for training and deployment (https://sites.google.com/view/value-explicit-pretraining).",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Reinforcement Learning (RL) has predominantly been conducted in simulator environments, primarily due to the prohibitive costs associated with conducting trial-and-error processes in the real world. With the advances in graphics and computational technologies, there has been a significant development in realistic simulators that capture the system (robot) information. However, the domain gap between synthetic and real data introduces a substantial performance drop when the models are directly deployed into real-world applications after training, a phenomenon commonly referred to as the Sim2Real gap.\nTraditionally, Sim2real transfer methods either optimize training on a simulation that closely resembles real-world data, or use Domain Randomization [26 ###reference_bx26###] or Domain Adaptation [27 ###reference_bx27###]. Other works [4 ###reference_bx4###] train on a simulated environment and deploy to self-driving cars. However, since these models were not trained in cluttered, pedestrian-rich environments, they would not generalize to some real-world scenarios. Some of the recent works, such as [25 ###reference_bx25###] and [28 ###reference_bx28###], have shown promising results in attempting to cope with the Sim2real gap using Style Transfer for data generation using limited real-world data but do not focus on learning optimal representations for performing the navigation task. On the other hand, models that are trained end-to-end in a simulator overfit to the trained task, without learning generalizable representations [1 ###reference_bx1###]. With all these practical considerations, it is imperative that we design a robust and low resource-footprint model that enables a mobile-robot to efficiently function in diverse scenarios.\nIn this paper, we formulate a new setting for Zero-shot Sim2Real transfer for Visual Navigation without Maps, involving data obtained from the CARLA simulator, as outlined in Fig. 1 ###reference_###. To avoid any Sim2Real gap within the control pipeline and focus only on the perception transfer, we built a Differential-drive based robot in the CARLA simulator that closely resembles our real-world robot. Using this setup, we build a large dataset consisting of First-person view (FPV) and Bird\u2019s eye view (BEV) image sequences from the CARLA [7 ###reference_bx7###] simulator. The system is trained entirely on this simulated dataset and is frozen and deployed on a real-world mobile robot.\n###figure_1### ###figure_2###"
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Related work",
|
| 15 |
+
"text": "Although, many methods [8 ###reference_bx8###, 12 ###reference_bx12###, 14 ###reference_bx14###, 15 ###reference_bx15###] use simulators for learning through an extensive amount of experiences that could be used to train a model policy end to end, some recent works [3 ###reference_bx3###] have shown promising results, on various tasks [11 ###reference_bx11###], using encoders that are pretrained on large unlabelled expert data and then train a significantly smaller network on top of the frozen encoder. Since these encoders are not trained on a specific task, we call it pretraining. Representations estimated using these pretrained and frozen encoders would help the model remain lightweight and flexible, which is desirable for mobile platforms. In our work, we employ such an approach with a new pre-training objective (to reconstruct BEV maps from FPV inputs), which we show provides very good generalization for downstream robotics tasks. Since learning representations does not involve any dynamics, any navigation dataset consisting of FPV-BEV could be used to pretrain the encoder. By training a Vision encoder using a large aggregated dataset, this could be a comparable alternative to the current ViT\u2019s [20 ###reference_bx20###] used for Robotics.\nBird\u2019s Eye View (BEV) based representation allows for a compact representation of the scene, invariant to any texture changes, scene variations, occlusions or lightning differences in an RGB image. This makes for an optimal representation for PointGoal Navigation. Furthermore, it is one of the most efficient and lightweight form of information, since the BEV maps are binary. For example, the corresponding BEV image of an 1MB FPV image is around 0.5KB. Some works estimate BEV maps from RGB images, such as [16 ###reference_bx16###], [19 ###reference_bx19###] and [21 ###reference_bx21###]. However, these map predictions from FPV images are typically only evaluated for visual tasks, with a lack of evidence that BEV-based representations can be useful for robotic tasks. Furthermore, [2 ###reference_bx2###] have shown that reconstruction-based methods like VAE [13 ###reference_bx13###] perform close to Random encoders. Incorporating these representations as inputs for training downstream models for robotic tasks to ensure their compatibility indeed is challenging. Our pretraining approach not only allows for learning visual representations that are optimal for robotic tasks, but also allows these representations to reconstruct the corresponding BEV map. Together, they allow the lightweight policy model to efficiently learn the task through these representations.\nRecurrent world-models. [9 ###reference_bx9###] introduces a novel approach to RL, incorporating a vision model for sensory data representation and a memory model for capturing temporal dynamics, all of which collectively improve agent performance. Apart from the advantages of pertaining each module, some of the modules in this architecture can be frozen after learning the representation of the environment, paving the way for more efficient and capable RL agents.\nWe propose a novel training regime and develop a perception model pretrained on a large simulated dataset to translate FPV-based RGB images into embeddings that align with the representations of the corresponding BEV images. 
Along with that, we upgrade the existing world models framework using novel model-based Temporal State Checking (TSC) and Anchor State Checking (ASC) methods that add robustness to the navigation pipeline when transferred to the real world. We release the code for pre-training, RL training and ROS-based deployment of our system on a real-world robot, the FPV-BEV dataset and pre-trained models. With the above contributions, we hope to move closer towards open-sourcing a robust Visual Navigation system that uses pre-trained models trained on large datasets and simulations for efficient representation learning.\n###figure_3###"
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Proposed Method",
|
| 21 |
+
"text": "For an autonomous agent to navigate using camera imagery, we use a simple system that consists of a perception model and a control model as shown in 3 ###reference_###. The perception model takes input observation and outputs an embedding that is then passed on to the policy, as part of the control model to output an action vector , throttle and steer. We first outline the perception model, with the objective of efficiently learning compact intermediate representations compatible with downstream policy learning, solely from a sequence of observations from the simulator. We then describe our second contribution, which involves the enhancement of the robustness and stability of the predictions during real-world evaluation."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "3.1",
|
| 25 |
+
"parent_section_id": "3",
|
| 26 |
+
"section_name": "III-A Perception model",
|
| 27 |
+
"text": "When training the perception model, we focus on 3 main principles. Firstly, , the embedding vector should always be consistent with the BEV reconstruction. Secondly, BEV images must be represented in a continuous latent space that has smooth temporal transitions to similar BEV images. Finally, the perception model must efficiently utilize an unlabelled sequence of images as an expert video portraying optimal behaviour. This would also allow for unsupervised training/fine-tuning of the model using real-world expert videos, which we leave for future work.\nThe perception model consists of a ResNet-50 [10 ###reference_bx10###] that is tasked with processing the observation obtained from an RGB camera, with the primary objective of comprehending the environmental context in which the robot operates, and compresses into a consistent intermediate representation, , which when decoded through a BEV decoder, outputs a BEV image . Our choice for BEV observations is rooted in their capacity to convey the surrounding roadmaps with minimal information redundancy. To learn such representations from a set of FPV and corresponding binary BEV images, prior methods [16 ###reference_bx16###] train a Variational Autoencoder (VAE) [13 ###reference_bx13###] to encode an RGB image that is decoded using , where is the batch size and is the embedding dimension. Given that we have batches (BEV reconstructions) and (ground-truth BEV observations), we could then optimize the following reconstruction loss :\nUsing the above loss, the VAE Encoder will learn to embed the FPV observations that will reconstruct their corresponding BEV observations , and being the corresponding BEV labels. Additionally, (Kullback Leibler) divergence forces the embeddings, to be within a Gaussian distribution of zero-mean and unit-covariance, that allows for smooth interpolation. The representations learnt by VAE would embed 2 FPV observations that are very similar, for example, 2 straight roads, but a have slight variation in the angle to be closer, than a straight road and an intersection. The following is the loss function used to train a VAE.\nAlthough, the above ELBO loss would allow the model to learn appropriate representations for understanding the observation, these representations do not capture the temporal understanding of the task. Typically, representations for robotics embed observations in such a way to make it easier for the policy to learn the behaviour of an objective quickly and efficiently. One of the earliest methods for self-supervised learning, Time-Contrastive Networks [24 ###reference_bx24###] disambiguates temporal changes by embedding representations closer in time, closer in the embedding space and farther otherwise by optimizing the following loss function.\nIn the above function, , and are a batch of embeddings corresponding to anchors, positives and negatives and is the similarity metric of the embeddings from the encoder . For a given single observation sample , the embedding obtained as an anchor , we uniformly sample a frame within a temporal distance threshold to obtain at timestep and , anywhere from to the end of the episode. However, recently [18 ###reference_bx18###] has shown that in-domain embeddings learnt by TCN are discontinuous, leading to sub-optimal policies. To alleviate this problem, we also add the reconstruction loss that enhances the stability of the training process, and helps learn better representations. 
To achieve the FPV-BEV translation using our method, we optimize the model parameters using the following contrastive with reconstruction loss for image encoding.\nIn the above loss function, balances the reconstruction with the contrastive loss, since the model optimizes the reconstruction loss slower than the contrastive loss. Using the above loss function, the model learns more temporally continuous and smoother embeddings as it constrains the proximity of the embeddings not only using the contrastive learning loss but also based on the BEV reconstructions."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3.2",
|
| 31 |
+
"parent_section_id": "3",
|
| 32 |
+
"section_name": "III-B Temporal model with Robustness modules",
|
| 33 |
+
"text": "To enhance the robustness of the perception model and transfer it to the real world setting, we implemented an additional model in the pipeline. Fig. 4 ###reference_### shows our proposed method of robustness enhancement. This involves the integration of an LSTM, functioning as a Memory model. The LSTM was trained on sequences gathered from sequences in the simulator. The primary outcome of this Memory model is to effectively infuse historical context into the prediction of , which forms a candidate of , and enhancing the robustness of the perception module when confronted with the unseen real-world data. To model the uncertainty in future states, we add an Mixture Density Network (MDN) on the top of LSTM output layer. The above pipeline is formulated as:\nwhere respectively denotes action, state prediction at the previous timestep, and historical hidden state at the time step . is the latent representation that is given as an input to the policy. We optimize M with the below loss function:\nwhere is, respectively, the training batch size, number of Gaussian models, Gaussian mixture weights with the constraint , and the probability of ground truth at time step conditioned on predicted mean and standard variance for Gaussian model .\n###figure_4### Nonetheless, it is noteworthy that that is obtained from the ResNet-50 may be slightly distinct from the latent distribution of BEV images when the perception model is applied to real-world observations , potentially impacting the performance of the LSTM and the policy. To mitigate this concern, we collected a dataset comprising of the BEV-based latent embeddings of 1439 FPV images which we define as the BEV anchors. In practice, upon obtaining the output vector from the ResNet-50, we measure its proximity to each , subsequently identifying the closest match. We replace with the identified anchor embedding , ensuring that both the LSTM and the policy consistently uses the pre-defined BEV data distribution. We pass as an input to the LSTM, along with the previous action to get the output . Again, we find the closest match for . We call this module Anchor State Checking (ASC):\n###figure_5### We also utilize the LSTM model for rejecting erroneous predictions by the ResNet-50, further enhancing the system\u2019s robustness against noise. If the processed prediction from the perception model is estimated with confidence score , obtained from either cosine-similarity or MSE, below a predefined threshold , we deliberately discard and opt for . In such instances, we resort to the output of the LSTM at the previous time-step. This module is known as Temporal State Checking (TSC):\nApart from adding robustness to the system using TSC, the utilization of the Memory model also serves as the crucial purpose of performing interpolation for the robots state in instances where actual observations are delayed, ensuring the continuity and reliability of the entire system. There is often a notable discrepancy in the update frequencies between control signals and camera frames, since control signals often exhibit a significantly higher update rate (50Hz) compared to the incoming stream of camera frames (15Hz). Values mentioned in brackets is in regards to our setup. This is also beneficial in the case of the recent large vision-language models like RT-X [6 ###reference_bx6###] that could solve many robotic tasks, but with a caveat of operating at a lower frequency, typically around 5Hz.\n###figure_6###"
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "4",
|
| 37 |
+
"parent_section_id": null,
|
| 38 |
+
"section_name": "IV Experimental platform and setup",
|
| 39 |
+
"text": "To leverage the extensive prior knowledge embedded in a pre-trained model, we opt to train a ResNet-50 [10 ###reference_bx10###] model after initializing with ImageNet pre-trained weights on a large-scale dataset containing FPV-BEV image pairs captured in the simulator. We collected the train dataset from the CARLA simulator to train both the Perception and the Memory model. Along with that, we also collected the validation and the test datasets from 2 different real-world sources. Following are the details on the collected datasets.\n###figure_7###"
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "4.1",
|
| 43 |
+
"parent_section_id": "4",
|
| 44 |
+
"section_name": "IV-A Experimental platform",
|
| 45 |
+
"text": "For evaluating Zero-shot Sim2Real transfer, we built a hardware apparatus which is a Non-Holonomic, Differential-drive robot (Beobotv3) for the task of visual navigation. Our system is implemented using the ROS (Robotic Operating System) middleware and uses a Coral EdgeTPU, which is an ASIC chip designed to run CNN models for edge computing for all the compute. We used this Edge-TPU to run the forward inference of the ResNet-50 through a ROS nodes.\nThe CARLA simulator had been primarily tailored to self-driving applications, that use Ackermann steering, we further developed an existing differential drive setup using Schoomatic [17 ###reference_bx17###] and upgraded the CARLA simulator. We find this necessary because our real-world hardware system is based on differential-drive and to enable seamless transfer without any Sim2Real gap in the control pipeline, both the control systems need to have similar dynamics. In response to this limitation, Luttkus [17 ###reference_bx17###] designed a model for the integration of a differential-drive robot into the CARLA environment. Building upon their work, we undertook the development of a version of CARLA simulator catering to differential-drive robots for reinforcement learning, subsequently migrating it into the newly introduced CARLA 0.9.13."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "4.2",
|
| 49 |
+
"parent_section_id": "4",
|
| 50 |
+
"section_name": "IV-B Data collection",
|
| 51 |
+
"text": ""
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "4.2.1",
|
| 55 |
+
"parent_section_id": "4.2",
|
| 56 |
+
"section_name": "IV-B1 Train dataset from CARLA simulator",
|
| 57 |
+
"text": "Within the CARLA simulator, we have access to the global waypoints along various trajectories. To allow more diversity, we randomly sampled a range of different orientations and locations. Leveraging this setup, we facilitated the generation of a large dataset of FPV-BEV images. We augmented the simulator\u2019s realism by introducing weather randomization and non-player traffic into the simulated environment."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4.2.2",
|
| 61 |
+
"parent_section_id": "4.2",
|
| 62 |
+
"section_name": "IV-B2 Validation dataset from Google Street View",
|
| 63 |
+
"text": "Using the Google Street View API, we obtained all the panoramic images from various locations on the USC campus. The panoramic images were segmented with a Horizontal Field of View (FoV) of 90 degrees and are manually segregated into different 6 different classes as shown in Fig. 5 ###reference_###. The validation dataset does not have any temporal sequencing and is primarily focused on having a broader and more uniform data distribution across all the classes. Due to these reasons, this dataset becomes an optimal choice for evaluating the perception model."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "4.2.3",
|
| 67 |
+
"parent_section_id": "4.2",
|
| 68 |
+
"section_name": "IV-B3 Test dataset from Beobotv3",
|
| 69 |
+
"text": "To evaluate the quality of representations estimated by the entire system, we record a video sequence using a mobile robot. More precisely, we recorded a set of 5 ROSBag sequences at different locations of the USC campus. Later, we labelled all the frames in a ROSBag sequence, similar to the above paragraph. However, unlike the validation set, the test dataset has temporal continuity, which helps us judge the entire navigation system.\n###figure_8###"
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "5",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "Evaluation and Results",
|
| 75 |
+
"text": "Through our experiments, we aim to answer the following questions in regards to our proposed method.\nHow good are the representations obtained from the pretrained model for learning to navigate using online RL?\nHow well can we plan using the BEV reconstructions from the pretrained model?\nDoes contrastive learning help learning good representations compared to an auxiliary task?\nWhat are the performance benefits by adding ASC, TSC, and both?\nHow efficient and optimal is the navigation system when transferred to the Real-world setup?\nPolicy Learning. We performed RL experiments by deploying the frozen pretrained encoder and training a 1-layer policy in the CARLA simulator Fig. 6 ###reference_###. The task for the agent is to navigate to a goal destination using an RGB image (, ). We accomplished this by training a policy employing the PPO algorithm [23 ###reference_bx23###]. The design of the reward function is rooted in proportionality to the number of waypoints the robot achieves to the designated goal point. In each timestep, the policy receives the current embedding of the observation concatenated with the directional vector pointing towards the waypoint tasked with producing a pair of (throttle, steer) values. We compared our method with VAE (reconstructing only the BEV image; Eqn. 2 ###reference_###), TCN (trained using Eqn. 3 ###reference_###), Random (Randomly initialized encoder and frozen), CLIP-RN50 [20 ###reference_bx20###]. Note that, many of the prior works [2 ###reference_bx2###, 5 ###reference_bx5###] have shown that randomly initialized and frozen encoders do learn decent features from an observation.\nPlanning We use TEB planner [22 ###reference_bx22###] to compute the action using an occupancy map (BEV reconstruction) to perform a task. Typically, occupancy map-based planners like TEB, use LiDAR data to compute the map of the environment and estimate a plan to perform the task, but in our case, we reconstruct the occupancy map using embedding obtained from RGB inputs. These maps are straightforward to compute in the case of our method and the VAE baseline, since these methods use a decoder. For the other baselines like the Random, CLIP and TCN encoder, we freeze the encoder and train the decoder to upsample the embeddings to estimate the BEV reconstruction. The results obtained for the planning task are shown in Fig. 6 ###reference_### as dotted lines.\nQuantitative Analysis We evaluated the performance of our ResNet-50 model using the Validation dataset and the results are shown in Table 5 ###reference_###. The performance of our perception model on both simulation and real-world dataset are compared to the baseline, which is a 6-way ResNet-50 classifier. Our perception model identifies the closest matching class for the output embedding. The baseline is a ResNet-50 model trained on a 6-class training dataset comprising 140,213 labelled FPV images. This proves that contrastive learning using BEV prediction enables better generalization, to out-of-domain data.\nAblation Experiments for state checking Following a similar approach, we used the Test dataset to evaluate the entire system. Apart from the accuracy also used Cross entropy (CE) and Mean Square error (MSE) to judge the quality of reconstructions by the LSTM model. These results are shown in Table 7 ###reference_###. Similar to the above experiments, we also used data from the unseen Town from the CARLA simulator to asses the predictions of our system. 
The metrics presented in this table exhibit a slight decrease compared to Table 5 ###reference_###. This can be attributed to the increased presence of abnormal observations and higher ambiguity between classes within the time-series data obtained from the robot, as opposed to the manually collected and labelled dataset in the validation dataset.\nEvaluation on a Real-world system We perform experiments on a Real-world robot, where the agent is tasked with navigating to a given destination location, using the pretrained ResNet encoder and the trained policy in the Carla simulator. Success rates (SR) for planning experiments for our model are shown in Fig. 5 ###reference_###. For both policy learning and planning, we specify the computation costs in Table. 8 ###reference_###."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "6",
|
| 79 |
+
"parent_section_id": null,
|
| 80 |
+
"section_name": "VI Discussion and Future work",
|
| 81 |
+
"text": "In this paper we proposed a robust navigation system that is trained entirely in a simulator and frozen when deployed. We learn compact embeddings of an RGB image for Visual Navigation that are aligned with temporally closer representations and reconstruct corresponding BEV images. By decoupling the perception model from the control model, we get the added advantage of being able to pretrain the encoder using a set of observation sequences irrespective of the robot dynamics. Our system also consists of a memory module that enhances the robustness of the navigation system and is trained on an offline dataset from the simulator. Although our experiments in this paper are limited to data obtained through the simulator, one of the primary advantages of our methods is the ability to use additional simulator/real-world FPV-BEV datasets by aggregating with the current dataset."
|
| 82 |
+
}
|
| 83 |
+
],
|
| 84 |
+
"appendix": [],
|
| 85 |
+
"tables": {},
|
| 86 |
+
"image_paths": {
|
| 87 |
+
"1": {
|
| 88 |
+
"figure_path": "2310.18847v2_figure_1.png",
|
| 89 |
+
"caption": "Figure 1: Overview of our system We first train the visual navigation system on a large-scale dataset collected in the simulator and deploy the frozen model in an unseen real-world environment.",
|
| 90 |
+
"url": "http://arxiv.org/html/2310.18847v2/extracted/5490079/figs/carlagstview.png"
|
| 91 |
+
},
|
| 92 |
+
"2": {
|
| 93 |
+
"figure_path": "2310.18847v2_figure_2.png",
|
| 94 |
+
"caption": "Figure 2: Training pipeline for the perception model. (a) During the training phase, the ResNet model is trained using a set of temporal sequences, consisting of pairs of input (FPV images, displacement and orientation to goal) and output (BEV images) from the simulator. Our contrastive loss embeds positive closer to anchor and negative farther away. (b) In the bottom, we pictorially show the input and the output that is used to train the memory module.",
|
| 95 |
+
"url": "http://arxiv.org/html/2310.18847v2/extracted/5490079/figs/iros_figure.png"
|
| 96 |
+
},
|
| 97 |
+
"3": {
|
| 98 |
+
"figure_path": "2310.18847v2_figure_3.png",
|
| 99 |
+
"caption": "Figure 3: Working of the System. RGB observation otsubscript\ud835\udc5c\ud835\udc61o_{t}italic_o start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT at time step t\ud835\udc61titalic_t is passed to the perception model (blue) that compresses it into an embedding ztsubscript\ud835\udc67\ud835\udc61z_{t}italic_z start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT. The memory model takes the current latent representation ztsubscript\ud835\udc67\ud835\udc61z_{t}italic_z start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT and uses the historical context to refine the state into z^tsubscript^\ud835\udc67\ud835\udc61\\hat{z}_{t}over^ start_ARG italic_z end_ARG start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT. These embeddings could either be used to train a control policy (orange) or to reconstruct the Bird\u2019s Eye View (BEV) for planning (grey). Both utilities result in an action command atsubscript\ud835\udc4e\ud835\udc61a_{t}italic_a start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT.",
|
| 100 |
+
"url": "http://arxiv.org/html/2310.18847v2/extracted/5490079/figs/sim2real_block.png"
|
| 101 |
+
},
|
| 102 |
+
"4": {
|
| 103 |
+
"figure_path": "2310.18847v2_figure_4.png",
|
| 104 |
+
"caption": "Figure 4: Robustness enhancement using Memory module. TSC (red) only takes input from the representation ztsubscript\ud835\udc67\ud835\udc61z_{t}italic_z start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT when it comes with a high confidence score. Otherwise, it takes the previous prediction by the LSTM z^t\u22121subscript^\ud835\udc67\ud835\udc611\\hat{z}_{t-1}over^ start_ARG italic_z end_ARG start_POSTSUBSCRIPT italic_t - 1 end_POSTSUBSCRIPT as interpolation. ASC (green) improves the representation of the incoming observation by making it in-domain. The crosses above correspond to rejecting the precepts and using the model\u2019s state prediction as the current state.",
|
| 105 |
+
"url": "http://arxiv.org/html/2310.18847v2/x1.png"
|
| 106 |
+
},
|
| 107 |
+
"5": {
|
| 108 |
+
"figure_path": "2310.18847v2_figure_5.png",
|
| 109 |
+
"caption": "Figure 5: Out-of-domain and real-world evaluation We constructed two 6-class validation datasets: one from the simulator (first row in the table) and another from street-view data (second row). Each class corresponds to the BEV images shown above. We specify accuracies for each class. Along with that, we also specify the success rate (SR) of the agent, when the encoder is deployed for real-world visual navigation. Our method outperformed the ResNet classifier (baseline) on both the unseen simulation dataset, the real-world validation dataset and real-world navigation as shown above.",
|
| 110 |
+
"url": "http://arxiv.org/html/2310.18847v2/extracted/5490079/figs/table1.png"
|
| 111 |
+
},
|
| 112 |
+
"6": {
|
| 113 |
+
"figure_path": "2310.18847v2_figure_6.png",
|
| 114 |
+
"caption": "Figure 6: Policy learning and Planning experiments on navigation task using pretrained representations. Using a pretrained ResNet encoder, we compare our method with different baselines. The training curves are obtained when we train a 1-layer policy, using RL, that takes the embeddings from the frozen encoder. The x\ud835\udc65xitalic_x and y\ud835\udc66yitalic_y axis corresponds to iterations and the cumulative reward, with the shaed regions showing the 95% confidence intervals. We also perform planning experiments, where the BEV reconstructions are used to navigate to the goal, as shown the by the success rate (SR), through the dotted lines corresponding to each method.",
|
| 115 |
+
"url": "http://arxiv.org/html/2310.18847v2/extracted/5490079/figs/Town05_prtr.png"
|
| 116 |
+
},
|
| 117 |
+
"7": {
|
| 118 |
+
"figure_path": "2310.18847v2_figure_7.png",
|
| 119 |
+
"caption": "Figure 7: Ablation experiments on the Test Dataset. Each double-row corresponds to a data sequence. We demonstrate that our approach not only attains high ACC (accuracy), but also provides a more granular BEV representation compared to the naive classifier, as indicated by the MSE (Mean Squred Error) and CE (Cross-Entropy) metrics. In the upper portion of the table, we assessed our method independently of the LSTM on an unseen temporal sequence from the simulator, contrasting it with the baseline CNN classifier. In the lower portion, we compared the performance of system with and without LSTM on a real-world data sequence. Note that dashes in the table indicate the absence of a class in the respective sequence. We compute the mean values for each row as shown in the last column.",
|
| 120 |
+
"url": "http://arxiv.org/html/2310.18847v2/extracted/5490079/figs/table2.png"
|
| 121 |
+
},
|
| 122 |
+
"8": {
|
| 123 |
+
"figure_path": "2310.18847v2_figure_8.png",
|
| 124 |
+
"caption": "Figure 8: Comparison of runtime. Computation costs (runtime in milliseconds) of each module in the navigation system for policy learning and planning are shown above.",
|
| 125 |
+
"url": "http://arxiv.org/html/2310.18847v2/extracted/5490079/figs/table_flops.png"
|
| 126 |
+
}
|
| 127 |
+
},
|
| 128 |
+
"validation": true,
|
| 129 |
+
"references": [
|
| 130 |
+
{
|
| 131 |
+
"1": {
|
| 132 |
+
"title": "\u201cLearning Robust Control Policies for End-to-End Autonomous\nDriving From Data-Driven Simulation\u201d",
|
| 133 |
+
"author": "Alexander Amini et al.",
|
| 134 |
+
"venue": "In IEEE Robotics Autom. Lett. 5.2, 2020, pp. 1143\u20131150",
|
| 135 |
+
"url": null
|
| 136 |
+
}
|
| 137 |
+
},
|
| 138 |
+
{
|
| 139 |
+
"2": {
|
| 140 |
+
"title": "\u201cUnsupervised State Representation Learning in Atari\u201d",
|
| 141 |
+
"author": "Ankesh Anand et al.",
|
| 142 |
+
"venue": "In Advances in Neural Information Processing Systems 32:\nAnnual Conference on Neural Information Processing Systems 2019, NeurIPS\n2019, December 8-14, 2019, Vancouver, BC, Canada, 2019, pp. 8766\u20138779",
|
| 143 |
+
"url": null
|
| 144 |
+
}
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"3": {
|
| 148 |
+
"title": "\u201cMeta Reinforcement Learning for Sim-to-real Domain\nAdaptation\u201d",
|
| 149 |
+
"author": "Karol Arndt, Murtaza Hazara, Ali Ghadirzadeh and Ville Kyrki",
|
| 150 |
+
"venue": "In CoRR abs/1909.12906, 2019",
|
| 151 |
+
"url": null
|
| 152 |
+
}
|
| 153 |
+
},
|
| 154 |
+
{
|
| 155 |
+
"4": {
|
| 156 |
+
"title": "\u201cLearning to Drive from Simulation without Real World Labels\u201d",
|
| 157 |
+
"author": "Alex Bewley et al.",
|
| 158 |
+
"venue": "In International Conference on Robotics and Automation,\nICRA 2019, Montreal, QC, Canada, May 20-24, 2019",
|
| 159 |
+
"url": null
|
| 160 |
+
}
|
| 161 |
+
},
|
| 162 |
+
{
|
| 163 |
+
"5": {
|
| 164 |
+
"title": "\u201cLarge-Scale Study of Curiosity-Driven Learning\u201d",
|
| 165 |
+
"author": "Yuri Burda et al.",
|
| 166 |
+
"venue": "In 7th International Conference on Learning Representations,\nICLR 2019, New Orleans, LA, USA, May 6-9, 2019",
|
| 167 |
+
"url": null
|
| 168 |
+
}
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"6": {
|
| 172 |
+
"title": "\u201cOpen X-Embodiment: Robotic Learning Datasets and RT-X\nModels\u201d",
|
| 173 |
+
"author": "Open X.-Embodiment Collaboration",
|
| 174 |
+
"venue": "In CoRR abs/2310.08864, 2023",
|
| 175 |
+
"url": null
|
| 176 |
+
}
|
| 177 |
+
},
|
| 178 |
+
{
|
| 179 |
+
"7": {
|
| 180 |
+
"title": "\u201cCARLA: An Open Urban Driving Simulator\u201d",
|
| 181 |
+
"author": "Alexey Dosovitskiy et al.",
|
| 182 |
+
"venue": "In 1st Annual Conference on Robot Learning, CoRL 2017,\nMountain View, California, USA, November 13-15, 2017, Proceedings 78, Proceedings of Machine Learning Research",
|
| 183 |
+
"url": null
|
| 184 |
+
}
|
| 185 |
+
},
|
| 186 |
+
{
|
| 187 |
+
"8": {
|
| 188 |
+
"title": "\u201cLightweight Learner for Shared Knowledge Lifelong Learning\u201d",
|
| 189 |
+
"author": "Yunhao Ge et al.",
|
| 190 |
+
"venue": "In CoRR abs/2305.15591, 2023",
|
| 191 |
+
"url": null
|
| 192 |
+
}
|
| 193 |
+
},
|
| 194 |
+
{
|
| 195 |
+
"9": {
|
| 196 |
+
"title": "\u201cRecurrent World Models Facilitate Policy Evolution\u201d",
|
| 197 |
+
"author": "David Ha and J\u00fcrgen Schmidhuber",
|
| 198 |
+
"venue": "In Advances in Neural Information Processing Systems 31:\nAnnual Conference on Neural Information Processing Systems 2018, NeurIPS\n2018, 3-8 December 2018, Montr\u00e9al, Canada., 2018, pp. 2455\u20132467",
|
| 199 |
+
"url": null
|
| 200 |
+
}
|
| 201 |
+
},
|
| 202 |
+
{
|
| 203 |
+
"10": {
|
| 204 |
+
"title": "\u201cDeep Residual Learning for Image Recognition\u201d",
|
| 205 |
+
"author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun",
|
| 206 |
+
"venue": "In 2016 IEEE Conference on Computer Vision and Pattern\nRecognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016",
|
| 207 |
+
"url": null
|
| 208 |
+
}
|
| 209 |
+
},
|
| 210 |
+
{
|
| 211 |
+
"11": {
|
| 212 |
+
"title": "\u201cAnalysis of Randomization Effects on Sim2Real Transfer in\nReinforcement Learning for Robotic Manipulation Tasks\u201d",
|
| 213 |
+
"author": "Josip Josifovski et al.",
|
| 214 |
+
"venue": "In IEEE/RSJ International Conference on Intelligent Robots\nand Systems, IROS 2022, Kyoto, Japan, October 23-27, 2022",
|
| 215 |
+
"url": null
|
| 216 |
+
}
|
| 217 |
+
},
|
| 218 |
+
{
|
| 219 |
+
"12": {
|
| 220 |
+
"title": "\u201cSim2Real Transfer for Reinforcement Learning without\nDynamics Randomization\u201d",
|
| 221 |
+
"author": "Manuel Kaspar, Juan David Munoz Osorio and J\u00fcrgen Bock",
|
| 222 |
+
"venue": "In arXiv e-prints, 2020",
|
| 223 |
+
"url": null
|
| 224 |
+
}
|
| 225 |
+
},
|
| 226 |
+
{
|
| 227 |
+
"13": {
|
| 228 |
+
"title": "\u201cAuto-Encoding Variational Bayes\u201d",
|
| 229 |
+
"author": "Diederik P. Kingma and Max Welling",
|
| 230 |
+
"venue": "In 2nd International Conference on Learning Representations,\nICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track\nProceedings, 2014",
|
| 231 |
+
"url": null
|
| 232 |
+
}
|
| 233 |
+
},
|
| 234 |
+
{
|
| 235 |
+
"14": {
|
| 236 |
+
"title": "\u201cShaped Policy Search for Evolutionary Strategies using\nWaypoints\u201d",
|
| 237 |
+
"author": "Kiran Lekkala and Laurent Itti",
|
| 238 |
+
"venue": "In IEEE International Conference on Robotics and\nAutomation, ICRA 2021, Xi\u2019an, China, May 30 - June 5, 2021",
|
| 239 |
+
"url": null
|
| 240 |
+
}
|
| 241 |
+
},
|
| 242 |
+
{
|
| 243 |
+
"15": {
|
| 244 |
+
"title": "\u201cArtificial intelligence for precision movement robot\u201d",
|
| 245 |
+
"author": "Kiran Kumar Lekkala and Vinay Kumar Mittal",
|
| 246 |
+
"venue": "In 2015 2nd International Conference on Signal Processing\nand Integrated Networks (SPIN), 2015, pp. 378\u2013383",
|
| 247 |
+
"url": null
|
| 248 |
+
}
|
| 249 |
+
},
|
| 250 |
+
{
|
| 251 |
+
"16": {
|
| 252 |
+
"title": "\u201cMonocular Semantic Occupancy Grid Mapping With Convolutional\nVariational Encoder\u2013Decoder Networks\u201d",
|
| 253 |
+
"author": "Chenyang Lu, Marinus Jacobus Gerardus Molengraft and Gijs Dubbelman",
|
| 254 |
+
"venue": "In IEEE Robotics and Automation Letters 4.2",
|
| 255 |
+
"url": null
|
| 256 |
+
}
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"17": {
|
| 260 |
+
"title": "\u201cScoomatic: Simulation and Validation of a Semi-Autonomous\nIndividual Last-Mile Vehicle\u201d",
|
| 261 |
+
"author": "Lennart Luttkus, Peter Kr\u00f6nes and Lars Mikelsons",
|
| 262 |
+
"venue": "In Sechste IFToMM D-A-CH Konferenz 2020: 27./28. Februar\n2020, Campus Technik Lienz 2020, Feb. 21, 2020",
|
| 263 |
+
"url": null
|
| 264 |
+
}
|
| 265 |
+
},
|
| 266 |
+
{
|
| 267 |
+
"18": {
|
| 268 |
+
"title": "\u201cVIP: Towards Universal Visual Reward and Representation via\nValue-Implicit Pre-Training\u201d",
|
| 269 |
+
"author": "Yecheng Jason Ma et al.",
|
| 270 |
+
"venue": "In The Eleventh International Conference on Learning\nRepresentations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023",
|
| 271 |
+
"url": null
|
| 272 |
+
}
|
| 273 |
+
},
|
| 274 |
+
{
|
| 275 |
+
"19": {
|
| 276 |
+
"title": "\u201cCross-View Semantic Segmentation for Sensing Surroundings\u201d",
|
| 277 |
+
"author": "Bowen Pan et al.",
|
| 278 |
+
"venue": "In IEEE Robotics and Automation Letters 5.3",
|
| 279 |
+
"url": null
|
| 280 |
+
}
|
| 281 |
+
},
|
| 282 |
+
{
|
| 283 |
+
"20": {
|
| 284 |
+
"title": "\u201cLearning Transferable Visual Models From Natural Language\nSupervision\u201d",
|
| 285 |
+
"author": "Alec Radford et al.",
|
| 286 |
+
"venue": "In Proceedings of the 38th International Conference on\nMachine Learning, ICML 2021, 18-24 July 2021, Virtual Event 139, Proceedings of Machine Learning Research",
|
| 287 |
+
"url": null
|
| 288 |
+
}
|
| 289 |
+
},
|
| 290 |
+
{
|
| 291 |
+
"21": {
|
| 292 |
+
"title": "\u201cA Sim2Real Deep Learning Approach for the Transformation of\nImages from Multiple Vehicle-Mounted Cameras to a Semantically Segmented\nImage in Bird\u2019s Eye View\u201d",
|
| 293 |
+
"author": "Lennart Reiher, Bastian Lampe and Lutz Eckstein",
|
| 294 |
+
"venue": "In 23rd IEEE International Conference on Intelligent\nTransportation Systems, ITSC 2020, Rhodes, Greece, September 20-23, 2020",
|
| 295 |
+
"url": null
|
| 296 |
+
}
|
| 297 |
+
},
|
| 298 |
+
{
|
| 299 |
+
"22": {
|
| 300 |
+
"title": "\u201cIntegrated online trajectory planning and optimization in\ndistinctive topologies\u201d",
|
| 301 |
+
"author": "Christoph R\u00f6smann, Frank Hoffmann and Torsten Bertram",
|
| 302 |
+
"venue": "In Robotics Auton. Syst. 88, 2017, pp. 142\u2013153",
|
| 303 |
+
"url": null
|
| 304 |
+
}
|
| 305 |
+
},
|
| 306 |
+
{
|
| 307 |
+
"23": {
|
| 308 |
+
"title": "\u201cProximal Policy Optimization Algorithms\u201d",
|
| 309 |
+
"author": "John Schulman et al.",
|
| 310 |
+
"venue": "In CoRR abs/1707.06347, 2017",
|
| 311 |
+
"url": null
|
| 312 |
+
}
|
| 313 |
+
},
|
| 314 |
+
{
|
| 315 |
+
"24": {
|
| 316 |
+
"title": "\u201cTime-Contrastive Networks: Self-Supervised Learning from\nVideo\u201d",
|
| 317 |
+
"author": "Pierre Sermanet et al.",
|
| 318 |
+
"venue": "In 2018 IEEE International Conference on Robotics and\nAutomation, ICRA 2018, Brisbane, Australia, May 21-25, 2018",
|
| 319 |
+
"url": null
|
| 320 |
+
}
|
| 321 |
+
},
|
| 322 |
+
{
|
| 323 |
+
"25": {
|
| 324 |
+
"title": "\u201cGeneSIS-Rt: Generating Synthetic Images for Training\nSecondary Real-World Tasks\u201d",
|
| 325 |
+
"author": "Gregory J. Stein and Nicholas Roy",
|
| 326 |
+
"venue": "In 2018 IEEE International Conference on Robotics and\nAutomation, ICRA 2018, Brisbane, Australia, May 21-25, 2018",
|
| 327 |
+
"url": null
|
| 328 |
+
}
|
| 329 |
+
},
|
| 330 |
+
{
|
| 331 |
+
"26": {
|
| 332 |
+
"title": "\u201cDomain Randomization for Transferring Deep Neural Networks\nfrom Simulation to the Real World\u201d",
|
| 333 |
+
"author": "M. Tobin et al.",
|
| 334 |
+
"venue": "In 2017 IEEE/RSJ International Conference on Intelligent\nRobots and Systems (IROS), 2017, pp. 23\u201330",
|
| 335 |
+
"url": null
|
| 336 |
+
}
|
| 337 |
+
},
|
| 338 |
+
{
|
| 339 |
+
"27": {
|
| 340 |
+
"title": "\u201cBi-Directional Domain Adaptation for Sim2Real Transfer of\nEmbodied Navigation Agents\u201d",
|
| 341 |
+
"author": "Joanne Truong, Sonia Chernova and Dhruv Batra",
|
| 342 |
+
"venue": "In IEEE Robotics and Automation Letters 6.2",
|
| 343 |
+
"url": null
|
| 344 |
+
}
|
| 345 |
+
},
|
| 346 |
+
{
|
| 347 |
+
"28": {
|
| 348 |
+
"title": "\u201cVR-Goggles for Robots: Real-to-Sim Domain Adaptation for\nVisual Control\u201d",
|
| 349 |
+
"author": "Jingwei Zhang et al.",
|
| 350 |
+
"venue": "In IEEE Robotics Autom. Lett. 4.2, 2019, pp. 1148\u20131155",
|
| 351 |
+
"url": null
|
| 352 |
+
}
|
| 353 |
+
}
|
| 354 |
+
],
|
| 355 |
+
"url": "http://arxiv.org/html/2310.18847v2"
|
| 356 |
+
}
|
20240323/2311.03326v2.json
ADDED
|
@@ -0,0 +1,76 @@
| 1 |
+
{
|
| 2 |
+
"title": "Non-convex potential games for finding global solutions to sensor network localization",
|
| 3 |
+
"abstract": "Sensor network localization (SNL) problems require determining the physical coordinates of all sensors in a network. This process relies on the global coordinates of anchors and the available measurements between non-anchor and anchor nodes. Attributed to the intrinsic non-convexity, obtaining a globally optimal solution to SNL\nis challenging, as well as implementing corresponding algorithms. In this paper, we formulate a non-convex multi-player potential game for a generic SNL problem to investigate the identification condition of the global Nash equilibrium (NE) therein, where the global NE represents the global solution of SNL. We employ canonical duality theory to transform the non-convex game into a complementary dual problem. Then we develop a conjugation-based algorithm to compute the stationary points of the complementary dual problem. On this basis,\nwe show an identification condition of the global NE: the stationary point of the proposed algorithm satisfies a duality relation. Finally, simulation results are provided to validate the effectiveness of the theoretical results.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "INTRODUCTION",
|
| 9 |
+
"text": "Wireless sensor networks (WSNs), due to their capabilities of sensing, processing, and communication, have a wide range of applications [1 ###reference_b1###, 2 ###reference_b2###], such as target tracking and detection [3 ###reference_b3###, 4 ###reference_b4###], environment monitoring [5 ###reference_b5###], area exploration [6 ###reference_b6###], data collection and cooperative robot tasks [7 ###reference_b7###]. \nFor all of these applications, it is essential to determine the location of every sensor with the desired accuracy.\nEstimating locations of the sensor nodes based on measurements between neighboring nodes has attracted many research interests in recent years, see typical examples [8 ###reference_b8###, 9 ###reference_b9###].\n\nRange-based methods constitute a common inter-node measurement approach utilizing signal transmission based techniques such as\ntime of arrival, time-difference of arrival, and strength of received radio frequency signals [10 ###reference_b10###].\nDue to limited\ntransmission power, the measurements can only be obtained\nwithin a radio range. A pair of nodes are called neighbors if their distance is less than this radio range [11 ###reference_b11###].\nAlso, there are some anchor nodes whose global positions are known\n[12 ###reference_b12###]. Then a sensor network localization\n(SNL) problem is defined as follows: Given the positions of the anchor nodes of the WSN\nand the measurable information among each non-anchor node and its neighbors, find the positions of the rest of non-anchor nodes.\nTo better describe a WSN and each sensor\u2019s possible and ideal localization actions,\ngame theory is found useful in modeling WSNs and SNL problems\n[13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###]. The Nash equilibrium (NE) is a prominent concept in game theory, which characterizes a profile of stable strategies where rational sensor nodes would not choose to deviate from their location strategies [16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###]. Particularly,\npotential game is well-suited to model the strategic\nbehavior in SNL problems [13 ###reference_b13###, 19 ###reference_b19###].\nNote that the sensors need to consider the positioning accuracy of the whole WSN while ensuring their own positioning accuracy through the given information. The potential game framework can\nguarantee such an alignment between the individual sensor\u2019s profit and the global network\u2019s objective by characterizing a global unified potential function. In this way, it is natural and essential to seek a global NE of the whole sensor network rather than local NE and approximate solutions, since a global NE is equal to a global optimum of the potential function denoting the network\u2019s precise localization.\nNevertheless, non-convexity is an intrinsic challenge of SNL problems, which cannot be avoided by selecting modeling methods.\nIt is the status quo that finding the global optimum or equilibrium in non-convex SNL problems is still an open problem\n[20 ###reference_b20###, 21 ###reference_b21###, 11 ###reference_b11###]. 
The existing research methods for SNL problems mostly provide local or approximate solutions.\nSome relaxation methods such as semi-definite programming (SDP) [20 ###reference_b20###] and second-order cone programming [21 ###reference_b21###] are employed to transform the non-convex original problem into a convex optimization.\nThey ignore the non-convex constraints, yielding only approximate solutions. The alternating rank minimization (ARMA) algorithm [11 ###reference_b11###] has been considered to obtain an exact solution by mapping the rank constraints into complementary constraints. Nevertheless, this technique only guarantees\nlocal convergence.\nIn this paper, we aim to seek global solutions for SNL problems.\nSpecifically,\nwe formulate a non-convex SNL potential game,\nwhere\nboth the payoff function and the potential function are characterized by continuous fourth-order polynomials. This formulation\nenables us to avoid the non-smoothness in [13 ###reference_b13###, 19 ###reference_b19###], so as to effectively deal with the non-convex structures therein.\nWe reveal the existence and uniqueness of the global NE, which represents the global localization solution to SNL. Moreover, we employ the canonical duality\ntheory to transform the non-convex\ngame into a complementary dual problem and design a conjugation-based algorithm to compute the stationary points therein. Then,\nwe provide a sufficient\ncondition to identify the global NE: the stationary point of the proposed algorithm\nis the global NE if a duality relation is satisfied. Finally, we illustrate the\neffectiveness of\nour approach by numerical simulation results."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II Problem formulation",
|
| 15 |
+
"text": "In this section, we first introduce the range-based SNL problem of interest and then formulate it as a potential game.\nConsider a static sensor network\nin ( or 3) composed of\n anchor nodes\nwhose positions are known and non-anchor sensor nodes whose positions are\nunknown (usually ). Let a graph represent the sensing relationships between sensors, where is the sensor node set and is the edge set between sensors. Specifically,\n, where and correspond to the sets of non-anchor nodes and anchor nodes, respectively. Let for denote the actual position of the -th non-anchor\nnode, and for \ndenote the actual position of anchor node .\nFor a pair of sensor nodes and , their Euclidean distance\nis denoted as .\nEach sensor has the capability\nof sensing range measurements from other sensors within a\nfixed range , and define the edge set, i.e., there is an edge between two nodes if and only if either they are neighbors or they are both anchors.\nDenote as the neighbor set of non-anchor\nnodes with . Also, suppose that\nthe measurements are noise-free and all anchor positions , are accurate.\nHere we formulate the SNL problem as an -player SNL potential game ,\nwhere corresponds to the player set,\n is player \u2019s local feasible set, which is convex and compact, and is player \u2019s payoff function. In this context,\nwe map the position estimated by each non-anchor node as each player\u2019s strategy,\ni.e., the strategy of the player (non-anchor node) is the estimated position .\nDenote , as the position estimate strategy profile for all players, and as the position estimate strategy profile for all players except player .\nFor , the payoff function is constructed as\nwhere in measures the localization accuracy between node and its neighbor .\nThe individual objective of each non-anchor node is to ensure its position accuracy,\ni.e.,\nIn the SNL problem, each non-anchor node needs to consider the location accuracy of the whole sensor network while ensuring its own positioning accuracy through the given information.\nIn other words, each non-anchor node needs to guarantee consistency between its individual objective and collective objective.\nTo this end, by regarding the individual payoff as a marginal contribution to the whole network\u2019s collective objective [22 ###reference_b22###, 13 ###reference_b13###], we consider the following measurement of\nthe overall performance of sensor nodes\nHere, denotes the localization accuracy of node , which depends on the strategies of \u2019s neighbors, while denotes the localization accuracy of the entire network . Then we introduce the concept of potential game.\nA game is a potential\ngame if there exists a potential function such that, for ,\nfor every , and unilateral deviation .\nIt follows from Definition 1 ###reference_ef1### that\nany unilateral deviation from a strategy profile always results in the same change in both individual payoffs and a unified potential function. This indicates the alignment between each non-anchor node\u2019s selfish individual goal and the whole network\u2019s objective.\nThen we verify that in (2 ###reference_###) satisfies the potential function in Definition 1 ###reference_ef1###. See [24 ###reference_b24###, Appendix] for the proof.\nWith function in (2 ###reference_###) and payoffs for in (1 ###reference_###), the game is a potential game.\n\nMoreover, to attain an optimal value for , players need to engage in negotiations and alter their optimal strategies. 
The best-known concept that describes an acceptable result achieved by all players is the NE, whose definition is formulated below.\nA profile is said to be a Nash equilibrium (NE) of game (1 ###reference_###) if for any we have\n\nIt follows from Definition 1 ###reference_ef1### that\nan NE of a potential game ensures not only that each non-anchor node can adopt its optimal location strategy from the individual perspective, but also that the sensor network as a whole can achieve a precise localization from the global perspective.\nHere, we refer to this NE as the global NE due to the non-convex SNL formulation in this paper. This is different from the concept of local NE [25 ###reference_b25###, 26 ###reference_b26###], which only satisfies condition (4 ###reference_###) within a small neighborhood rather than over the whole feasible set. We also consider another mild but well-known concept to help characterize the solutions to (1 ###reference_###).\nA strategy profile is said to be a Nash stationary point of (1 ###reference_###) if\nwhere is the normal cone at point on set .\nIt is not difficult to verify that in non-convex games, if a profile is a global NE, then it must be a Nash stationary point, but not vice versa.\nNext, we show that the global NE is unique and represents the actual position profile of all non-anchor nodes, which equals the global solution of the SNL.\nWe first consider an -dimensional representation of sensor network graph , which is a mapping of to the point formations , where is the row vector of the coordinates of the -th node in and . In this paper, the is the actual position of sensor node .\nGiven the graph and an -dimensional representation of it, the pair is called a -dimensional framework. A framework is called generic (some special configurations exist among the sensor positions, e.g., groups of sensors may be collinear; the term generic highlights the need to exclude the problems arising from such configurations) if the set containing the coordinates of all its points is algebraically independent over the rationals [28 ###reference_b28###]. A framework is called rigid if there exists a sufficiently small positive constant such that if every framework satisfies for and for every pair connected by an edge in , then holds for any node pair regardless of whether there is an edge between them. Graph is called generically -rigid or simply rigid (in dimensions) if any generic framework is rigid. A framework is globally rigid if every framework that preserves the distances between node pairs connected by an edge in also preserves the distances between node pairs that are not connected by an edge. 
Graph\n is called generically globally rigid if\nany generic framework\n is\nglobally rigid [28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###].\nOn this basis, we make the following basic assumption.\nThe sensor topology graph is undirected and generically globally rigid.\nThe undirected graph topology is usually a common assumption in many graph-based approaches [31 ###reference_b31###, 4 ###reference_b4###].\nThe connectivity of can also be induced by some disk graph [11 ###reference_b11###], which ensures the validity of the information transmission between nodes.\nThe generic global rigidity of has been widely employed in SNL problems to guarantee the graph structure invariant, which indicates a unique localization of the sensor network [32 ###reference_b32###, 33 ###reference_b33###, 34 ###reference_b34###].\nBesides, there have been extensive discussions on graph rigidity in existing works [11 ###reference_b11###, 35 ###reference_b35###], but it is not the primary focus of our paper.\nThe following lemma reveals the existence and uniqueness of global NE . See [24 ###reference_b24###, Appendix] for the proof.\nUnder Assumption 1, the global NE of the potential game G is unique and corresponds to the actual position profile of all non-anchor nodes, which represents the global solution of the SNL.\nWhile we have obtained guarantees regarding the existence and uniqueness of global NE of the SNL problem, its identification and computation are still challenging since and are non-convex functions in our model.\nActually, as for convex games, most of the existing research works seek global NE via investigating first-order stationary points under Definition 3 [36 ###reference_b36###, 37 ###reference_b37###, 31 ###reference_b31###].\nHowever, in such a non-convex regime (2 ###reference_###),\none cannot expect to find a global NE easily following this way, because stationary points in non-convex settings are not equivalent to global NE anymore.\nSuch similar potential game models have also been considered in [13 ###reference_b13###, 19 ###reference_b19###].\nAs different from the use of the Euclidean norm in [13 ###reference_b13###, 19 ###reference_b19###], i.e., ,\nwe adopt the square of Euclidean norm to characterize and , i.e., . These functions endowed with continuous fourth-order polynomials enable us to avoid the non-smoothness and deal with the inherent non-convexity of SNL with useful technologies, so as to get the global NE.\nOn the other hand, previous efforts merely yield an approximate solution or a local NE by relaxing non-convex constraints or relying on additional convex assumptions, either under potential games or other modeling methods [32 ###reference_b32###, 11 ###reference_b11###].\nThus, they\nfail to adequately address the intrinsic\nnon-convexity of SNL.\nTo this end, we investigate the\nidentification condition of the global NE in the SNL problem. Specifically, we aim to find\nthe conditions that a stationary point of (1 ###reference_###) is consistent with the global NE and design an algorithm to solve it."
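To make the game described above concrete, the following Python sketch evaluates a payoff and a potential of the stated form: squared-Euclidean-norm residuals between estimated and measured ranges, which are continuous fourth-order polynomials in the position estimates. The exact weighting of anchor versus non-anchor terms and the maximization (negative-error) sign convention are not reproduced in the extracted text, so both are assumptions here.

```python
# Illustrative sketch only; the 1/2 factor on non-anchor pairs and the
# negative-error (maximization) convention are assumptions.
import numpy as np

def _range(dist, i, j):
    # measured range between i and j, whichever key ordering is stored
    return dist[(i, j)] if (i, j) in dist else dist[(j, i)]

def payoff(i, x, anchors, neighbors, dist):
    """Negative localization error of non-anchor node i (fourth-order in x[i])."""
    err = 0.0
    for j in neighbors[i]:
        xj = anchors[j] if j in anchors else x[j]
        err += (float(np.dot(x[i] - xj, x[i] - xj)) - _range(dist, i, j) ** 2) ** 2
    return -err

def potential(x, anchors, neighbors, dist):
    """Global potential: each payoff acts as a marginal contribution to this sum."""
    total = 0.0
    for i in x:
        for j in neighbors[i]:
            xj = anchors[j] if j in anchors else x[j]
            r = (float(np.dot(x[i] - xj, x[i] - xj)) - _range(dist, i, j) ** 2) ** 2
            total += r if j in anchors else 0.5 * r  # non-anchor residuals counted twice above
    return -total

# Tiny example: one non-anchor node with two anchor neighbors.
anchors = {"a1": np.array([0.0, 0.0]), "a2": np.array([1.0, 0.0])}
x = {"s1": np.array([0.4, 0.6])}                       # current estimate
neighbors = {"s1": ["a1", "a2"]}
dist = {("s1", "a1"): 0.5831, ("s1", "a2"): 0.8602}    # ranges measured at the true position (0.3, 0.5)
print(payoff("s1", x, anchors, neighbors, dist), potential(x, anchors, neighbors, dist))
```

At the true position both functions attain their maximum value of zero, which is the alignment between individual payoffs and the potential that the potential-game formulation relies on.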
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "3",
|
| 19 |
+
"parent_section_id": null,
|
| 20 |
+
"section_name": "III Derivation of the global Nash equilibrium",
|
| 21 |
+
"text": "In this section, we explore\nthe identification condition of the global NE of the SNL problem by virtue of canonical dual theory and develop a conjugation-based algorithm to compute it.\nIt is hard to directly identify whether a stationary point is the global NE\non the non-convex potential function (2 ###reference_###). Here,\nwe employ canonical duality theory [38 ###reference_b38###] to transform (2 ###reference_###) into a complementary dual problem and investigate the relationship between a stationary point of the dual problem and the global NE of game (1 ###reference_###).\nCanonical transformation\nWe first reformulate (2 ###reference_###) in a canonical form.\nDefine .\nin\n(2 ###reference_###)\nand define the profiles\nHere,\n\nmap the decision variables in domain to the quadratic functions in space .\nMoreover, we introduce quadratic functions ,\nThus, the potential function (2 ###reference_###) can be rewritten as:\n\n Note that the gradients is a one-to-one mapping, where is the range space of the gradient. Thus, recalling [38 ###reference_b38###],\n is a convex differential\ncanonical function. \nThis indicates that\nthe following one-to-one duality relation is invertible on :\nDenote the profiles ,\nwhere is the total number of elements in the edge sets .\nBased on (8 ###reference_###), the Legendre conjugates of \ncan be uniquely defined by\nwhere is called the Legendre canonical duality pair\non .\nWe regard as a canonical dual variable on the dual space . Then, based on the canonical duality theory [38 ###reference_b38###], we define the following the complementary function ,\nSo far, we have transformed the non-convex function (2 ###reference_###) into a complementary dual problem (10 ###reference_###).\nWe have the following result about the equivalency relationship of stationary points between (10 ###reference_###) and (2 ###reference_###), whose proof is shown in Appendix A ###reference_###.\nFor a profile , if there exists such that for ,\n\nis a stationary point of complementary function , then is a Nash stationary point of game (1 ###reference_###).\nBy Theorem 1 ###reference_hm1###, the equivalency of stationary points between (10 ###reference_###) and (1 ###reference_###)\nis due to the fact that\nthe duality relations (8 ###reference_###) are unique and invertible on , thereby closing the duality gap between the non-convex original game and its canonical dual problem.\nSufficient feasible domain\nNext, we introduce a sufficient feasible domain for the introduced conjugate variable , in order to investigate the global optimality of the stationary points in (10 ###reference_###).\nConsider the second-order derivative\nof in . Due to the expression of (10 ###reference_###), we can find that is quadratic in . Thus, is -free, and is indeed a linear combination for the elements of .\nIn this view, we denote . On this basis, we introduce the following set of\nAlgorithm design\nThen, we design a conjugation-based algorithm to compute the stationary points of the SNL problem with the assisted complementary information (the Legendre conjugate of and the canonical conjugate variable ).\nIn Alg. 1, the terms about for and represent the directions of gradient descent and ascent according to . The terms about and are projection operators [39 ###reference_b39###].\nWhen , the positive semi-definiteness of implies that is convex with respect to . 
Besides, the convexity of derives that its Legendre conjugate is also convex [40 ###reference_b40###], implying that the complementary function is concave in . Together with the non-expansiveness of projection operators and a decaying step size , this convex-concave property of implies the convergence of Alg. 1 and enables us to identify the global NE.\nEquilibrium design\nOn this basis,\nwe establish the relationship between the global NE in (2 ###reference_###)\nand a stationary point computed from Alg. 1. The proof is shown in Appendix B ###reference_###.\nUnder Assumption 1, profile is the global NE of game if there exists such that a stationary point\n obtained from Alg. 1\nsatisfies\n\nThe result in Theorem 2 ###reference_hm2### reveals that\nonce the stationary point of Alg. 1 is obtained, we can check the duality relation ,\nso as to identify whether the solution of Alg. 1 is the global NE.\nIn fact, it is necessary to check the duality for the convergent point of Alg. 1, because the computation of is restricted on the sufficient domain instead of the original . In this view, the gradient of may fall into the normal cone instead of being equal to , thereby losing the one-to-one relationship with . Thus, may not be the global NE.\nIn addition, we cannot directly employ the standard Lagrange multiplier\nmethod and the associated Karush-Kuhn-Tucker (KKT) theory herein, because we need to first confirm a feasible domain of by utilizing\ncanonical duality information (referring to ). In other words, once the duality relation is verified, we can say that the convergent point of Alg. 1 is indeed the global NE of game (1 ###reference_###).\nWe summarize a road map for seeking global\nNE in this non-convex SNL problem for friendly comprehension. That is,\nonce\nthe problem is defined and formulated, we first transform the original SNL potential game into a dual complementary problem. Then we seek the stationary point of via algorithm iterations, wherein the dual variable \nis restricted on . Finally, after\nobtaining the stationary point by convergence, we\nidentify whether the convergent point satisfies the\nduality relation. If so, the\nconvergent point is the global NE."
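The conjugation-based iteration described above (projected gradient descent in the primal variable, projected ascent in the canonical dual variable with a decaying step size, followed by a duality check on the convergent point) can be sketched generically as below. Since the complementary function itself is not reproduced in the extracted text, grad_x, grad_sigma, the two projections, and duality_residual are placeholders for the problem-specific quantities.

```python
# Generic descent-ascent sketch under the stated assumptions; not the paper's exact Alg. 1.
import numpy as np

def descent_ascent(x0, sigma0, grad_x, grad_sigma, proj_x, proj_sigma,
                   step0=1.0, iters=5000, tol=1e-8):
    x, sigma = np.asarray(x0, dtype=float), np.asarray(sigma0, dtype=float)
    for k in range(1, iters + 1):
        eta = step0 / k                                            # decaying step size
        x_new = proj_x(x - eta * grad_x(x, sigma))                 # descent in the primal variable
        sigma_new = proj_sigma(sigma + eta * grad_sigma(x, sigma)) # ascent in the dual variable
        if max(np.linalg.norm(x_new - x), np.linalg.norm(sigma_new - sigma)) < tol:
            x, sigma = x_new, sigma_new
            break
        x, sigma = x_new, sigma_new
    return x, sigma

def is_global_ne(x, sigma, duality_residual, eps=1e-6):
    """Identify the global NE: accept the convergent point only if the duality relation holds."""
    return np.linalg.norm(duality_residual(x, sigma)) < eps

# Toy convex-concave demo, L(x, s) = x^2 + s*x - s^2, saddle point at (0, 0).
gx = lambda x, s: 2 * x + s
gs = lambda x, s: x - 2 * s
ident = lambda v: v
print(descent_ascent(1.0, 1.0, gx, gs, ident, ident))  # iterates approach (0, 0)
```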
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "4",
|
| 25 |
+
"parent_section_id": null,
|
| 26 |
+
"section_name": "IV Numerical Experiments",
|
| 27 |
+
"text": "In this section,\nwe examine the effectiveness of our approach to seek the\nglobal NE of the SNL problem.\n###figure_1### ###figure_2### ###figure_3### ###figure_4### We first consider a two-dimensional case based on the UJIIndoorLoc dataset. The UJIIndoorLoc dataset was introduced in 2014 at the International Conference on Indoor Positioning and Indoor Navigation, to estimate a user location based on building and floor. The dataset is available on the UC Irvine Machine Learning Repository website [41 ###reference_b41###]. We extract the latitude and longitude coordinates of part of the sensors and standardize the data by doing min-max normalization.\nWe employ Alg. 1 to solve this problem.\nSet the tolerance and the terminal criterion\nWe show the effectiveness of Alg. 1 for SNL problems with different node configurations.\nTake and different numbers of anchor nodes.\nFig. 1 ###reference_### shows the computed sensor location results in these cases. The anchor nodes and\nthe true locations of non-anchor nodes are shown by red\nstars and blue asterisks, and the computed locations are shown\nby green circles.\nWe can see that Alg. 1 can localize all sensors in either small or large sensor network sizes."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "5",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "Conclusion",
|
| 33 |
+
"text": "In this paper, we have focused on the non-convex SNL problems. We have presented novel results on the identification condition of the global solution and the position-seeking algorithms. By formulating a non-convex SNL potential game, we have shown that the global NE exists and is unique. Then based on the canonical duality theory, we have proposed a conjugation-based algorithm to compute the stationary point of a complementary dual problem, which actually induces the global\nNE if a duality relation can be checked. Finally, the computational efficiency\nof our algorithm has been\nillustrated by several experiments.\nIn the future, we may extend our current results to more complicated cases such as i) generalizing the algorithm to distributed situations, ii) generalizing the model to cases\nwith measurement noise, and iii) exploring milder graph conditions."
|
| 34 |
+
}
|
| 35 |
+
],
|
| 36 |
+
"appendix": [
|
| 37 |
+
{
|
| 38 |
+
"section_id": "Appendix 1",
|
| 39 |
+
"parent_section_id": null,
|
| 40 |
+
"section_name": "Appendix A Proof of Theorem 1",
|
| 41 |
+
"text": "If\nthere exists such that is a stationary point of ,\nthen it satisfies the first-order condition, that is\nMoreover,\nbased on the invertible one-to-one duality relation (8 ###reference_###), for given \nwith ,\nwe have\nfor . By employing this relation in (A ###reference_###b), we have \nwhich implies\n\nBy substituting with , we have\nAccording to the chain rule,\n\nTherefore, (13 ###reference_###) is equivalent to\nAccording to the definition of potential game, (14 ###reference_###) implies\nwhich yields the conclusion."
|
| 42 |
+
},
|
| 43 |
+
{
|
| 44 |
+
"section_id": "Appendix 2",
|
| 45 |
+
"parent_section_id": null,
|
| 46 |
+
"section_name": "Appendix B Proof of Theorem 2",
|
| 47 |
+
"text": "If\nthere exists such that the pair is a stationary point of Alg. 1,\nthen it satisfies the first-order condition with respect to , that is\nTogether with , we claim that\nthe canonical duality relation holds over . Thus, (B ###reference_###b) becomes \nThis indicates that\nthe stationary point \nof on is also a stationary point profile of on . Based on Theorem 1 ###reference_hm1###, we can further derive that the profile with respect to the stationary point of on is a Nash stationary point of game (1 ###reference_###).\nMoreover, recall \nwith .\nThis indicates that\n is convex in . Also, note that is concave in dual variable due to the convexity of .\nThus,\nwe can obtain the global optimality of on , that is, for and ,\nThe inequality relation above tells that\nThis confirms that is the global NE of (1 ###reference_###), which completes the proof."
|
| 48 |
+
}
|
| 49 |
+
],
|
| 50 |
+
"tables": {},
|
| 51 |
+
"image_paths": {
|
| 52 |
+
"1(a)": {
|
| 53 |
+
"figure_path": "2311.03326v2_figure_1(a).png",
|
| 54 |
+
"caption": "(a) N=10\ud835\udc4110N=10italic_N = 10\nFigure 1: Computed sensor location results with different configurations.",
|
| 55 |
+
"url": "http://arxiv.org/html/2311.03326v2/extracted/5490543/n10.png"
|
| 56 |
+
},
|
| 57 |
+
"1(b)": {
|
| 58 |
+
"figure_path": "2311.03326v2_figure_1(b).png",
|
| 59 |
+
"caption": "(b) N=20\ud835\udc4120N=20italic_N = 20\nFigure 1: Computed sensor location results with different configurations.",
|
| 60 |
+
"url": "http://arxiv.org/html/2311.03326v2/extracted/5490543/n20.png"
|
| 61 |
+
},
|
| 62 |
+
"1(c)": {
|
| 63 |
+
"figure_path": "2311.03326v2_figure_1(c).png",
|
| 64 |
+
"caption": "(c) N=35\ud835\udc4135N=35italic_N = 35\nFigure 1: Computed sensor location results with different configurations.",
|
| 65 |
+
"url": "http://arxiv.org/html/2311.03326v2/extracted/5490543/n35.png"
|
| 66 |
+
},
|
| 67 |
+
"1(d)": {
|
| 68 |
+
"figure_path": "2311.03326v2_figure_1(d).png",
|
| 69 |
+
"caption": "(d) N=50\ud835\udc4150N=50italic_N = 50\nFigure 1: Computed sensor location results with different configurations.",
|
| 70 |
+
"url": "http://arxiv.org/html/2311.03326v2/extracted/5490543/n50.png"
|
| 71 |
+
}
|
| 72 |
+
},
|
| 73 |
+
"validation": true,
|
| 74 |
+
"references": [],
|
| 75 |
+
"url": "http://arxiv.org/html/2311.03326v2"
|
| 76 |
+
}
|
20240323/2311.07954v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2311.10959v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2311.13231v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2311.15383v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2312.02923v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2312.06203v2.json
ADDED
|
@@ -0,0 +1,129 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"title": "Offloading and Quality Control for AI Generated Content Services in 6G Mobile Edge Computing Networks",
|
| 3 |
+
"abstract": "AI-Generated Content (AIGC), as a novel manner of providing Metaverse services in the forthcoming Internet paradigm, can resolve the obstacles of immersion requirements. Concurrently, edge computing, as an evolutionary paradigm of computing in communication systems, effectively augments real-time interactive services. In pursuit of enhancing the accessibility of AIGC services, the deployment of AIGC models (e.g., diffusion models) to edge servers and local devices has become a prevailing trend. Nevertheless, this approach faces constraints imposed by battery life and computational resources when tasks are offloaded to local devices, limiting the capacity to deliver high-quality content to users while adhering to stringent latency requirements. So there will be a tradeoff between the utility of AIGC models and offloading decisions in the edge computing paradigm. This paper proposes a joint optimization algorithm for offloading decisions, computation time, and diffusion steps of the diffusion models in the reverse diffusion stage. Moreover, we take the average error into consideration as the metric for evaluating the quality of the generated results. Experimental results conclusively demonstrate that the proposed algorithm achieves superior joint optimization performance compared to the baselines.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "With GPT models capturing the spotlight, Generative Artificial Intelligence (GAI), as a transformative field within the broader landscape of machine learning and artificial intelligence, has changed the way people interact with and understand the digital world [1 ###reference_b1###]. The demonstration of the capabilities inherent in Generative Artificial Intelligence (GAI) models is referred to as AI-generated content (AIGC). With the development of AIGC techniques, multiple AIGC models (e.g., diffusion models) can be employed to generate outputs with diverse forms, including text-to-speech, text-to-image, and image-to-image [2 ###reference_b2###]. Therefore, AIGC-as-a-Service (AaaS) architecture is proposed to offer the generated content, repair corrupted images, or alter inputted images, resulting in providing Metaverse users with immersive AIGC services.\nEdge computing, as a novel computing paradigm, not only solves the latency concerns of cloud computing but also improves the security and quality of the communication networks involved by integrating with other key technologies including AI, Blockchain, and digital twin[3 ###reference_b3###]. The convergence of the AIGC models and edge computing paradigm becomes the focus of the future research direction, especially in the Metaverse field[4 ###reference_b4###] which emphasizes the immersion of users.\n6G, as the successor of the current 5G wireless communication network, aims to make further improvements in reliability, speed, and security, far surpassing the capabilities of 5G. The integration of 6G with edge computing is expected to revolutionize network architectures and computing, offering seamless, ultra-reliable, low-latency communication coupled with powerful, localized computing capabilities. This convergence will enable new applications and services that require high data rates, massive connectivity, and ultra-reliable low-latency communications, resulting in enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC), and massive machine type communications (mMTC).\nNonetheless, generic diffusion models (e.g., Open AI\u2019s DALL-E 2 [5 ###reference_b5###] and Google\u2019s Imagen [6 ###reference_b6###]) require substantial memory storage, so deploying large AIGC models is challenging due to the large volume of parameters [7 ###reference_b7###], which brings obstacles for real-time applications and devices with constrained computational resources. Though lightweight diffusion models (like the text-to-image model with 860M UNet and 123M text encoder proposed in [8 ###reference_b8###]) are employed on consumer mobile devices, the quality of generated content will also inevitably be affected. Thus, the convergence of diffusion models and edge computing systems has become the important direction of the future research area.\nChallenges: In light of the resource constraints inherent in mobile devices, only lightweight diffusion models can be deployed on the local devices, resulting in the locally generated content typically exhibiting a lower quality level. While larger models can be deployed on the edge server, it remains incapable of simultaneously managing all computational tasks while generating high-quality content. Consequently, the first challenge centers on the offloading decisions and quality level of the generated content. Secondly, different locations to process the computational task means the computation time is always different. 
As mobile users possess diverse and stringent latency requirements, how to balance the tradeoff between offloading decisions and computation time is the second challenge. Furthermore, the subjective nature of evaluating the quality level of generated content necessitates establishing a mathematical correlation between quality levels and diffusion models, which constitutes the third challenge.\nRelated Work: Recently, multiple studies have reviewed the state-of-the-art research and development in diffusion models [9 ###reference_b9###, 10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###]. Notably, within research on diffusion steps and image quality, [13 ###reference_b13###] focuses on the forward diffusion dynamics to bridge the gap between the ideal and the simulated by adopting smaller diffusion times. Furthermore, [15 ###reference_b15###] treats the number of reverse diffusion steps as a variable and seeks the optimal number of steps to balance the tradeoff between image quality and diffusion time. For the convergence of AIGC models and the edge computing paradigm, [16 ###reference_b16###] introduces the diffusion model-based AI-generated optimal decision (AGOD) algorithm to provide the optimal strategy for the selection of AIGC service providers (AGPs). Unlike the previous works, we propose a resource allocation scheme for the AGOD by correlating image quality with the diffusion steps and taking the computation time and utility of AIGC models into consideration.\nContributions: The main contributions of this work are:\nWe pioneer resource allocation for diffusion models in edge computing systems while guaranteeing the quality of the AIGC, which enhances the performance of the system and the experience of users, respectively.\nWe quantify the quality level of AIGC by taking the reverse diffusion steps of the diffusion process into consideration while accounting for the average error of computation results.\nThe proposed optimization algorithm jointly optimizes the tradeoff between the offloading decisions of the computational tasks and the utility of AIGC models."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "II System Model",
|
| 15 |
+
"text": "We consider a mobile edge computing communication system with mobile user equipments (UEs) and one edge server. Assume in this system that each UE requests to access AIGC services and content of corresponding quality level is generated at the chosen point based on the offloading decision and allocated reverse diffusion steps.\nAI-generated content model. To generalize the types of AIGC services (e.g. text-to-image, text-to-video, and image-to-image generation), we employ score-based diffusion models within our proposed system as the exemplar of AIGC models. Assume that there has been a standard sampling and diffusion process in the forward stage. AIGC services are then mainly provided by gradually reversing the diffusion process, step by step. Consequently, the quality of AIGC computation results is intricately related to the high quantity of reverse diffusion steps during the reverse diffusion stage.\nWhen the generation requests of AIGC services are evaluated by the monitor, different offloading decisions and diffusion strategies of computational tasks are allocated to the edge server or local UEs based on the current energy conditions of the system. We denote the allocated offloading decisions and reverse diffusion steps of computation tasks requested by UE in the reverse stage as the binary variable and discretization variable respectively. Specifically, indicates that UE \u2019s task will be processed locally on the mobile device, while indicates that the computation task will be offloaded to the edge server.\nFor the reverse diffusion step , as elucidated in the work by Du et al. [16 ###reference_b16###], it exhibits a positive correlation with the associated energy expenditure. Given the finite energy resources inherent in the practical system, it is thus noteworthy that is restricted as each step of the reverse diffusion process necessitates energy consumption, primarily associated with the execution of a neural network for Gaussian noise removal. Therefore, we transform the total energy limitation into the constraint of total supported reverse diffusion steps at different servers. To enhance the generality of the proposed algorithm, we denote the total supported reverse diffusion steps at the edge server as with the constraint illustrated as:\nwhere is the set of UEs. Furthermore, we denote as the total supported reversed diffusion steps at each local UE . In order to mitigate the potential for specific computation tasks to greedily consume computational resources on the edge server, a constraint is imposed wherein the maximum number of reverse diffusion steps allocated to each computation task is restricted. This constraint is denoted as the maximum reverse diffusion steps limit for each UE denoted by , where signifies the -th UE with . Therefore, another constraint considering the limitation of energy including UE and the edge server is introduced as , and after mathematical transformations, this constraint is expressed as:"
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "II-A Cost Functions",
|
| 21 |
+
"text": "Computation time: As per the findings reported in Page 6 of the paper [15 ###reference_b15###], it is elucidated that the overall computation time can be expressed as an affine function with respect to the reverse diffusion steps. Consequently, the cumulative computational delays for tasks with different offloading decisions are determined as:\nwhere and are designated as a constant temporal interval per individual step within the process at UE and the edge server. Given that the system is modeled for edge computing and the durations associated with transmitting the computed results through the downlink channels are negligible compared with the processing time of the edge server, it is pertinent to note that we shall exclude further consideration of downlink transmission time.\nAverage error of computation results:\nIn order to mitigate the influence of subjective factors on image quality and increase the alignment of the AIGC content with the users\u2019 request, we assess content quality by means of average error metrics. Building upon the Eq.(16) provided in the work [15 ###reference_b15###], we derive that the reverse conditional diffusion pathway exhibits exponential error reduction and proposed a modified version which could be defined as:\nwhere function is to determine the quality of processed content by adding Gaussian noise within the context of forward diffusion process, which can be modeled mathematically by a convex function related to the forward process , and , as the attenuation factor, represents the recovering ability of the AIGC model. When , it signifies that the\nforward diffusion has been sufficiently close to the unknown and simple noise distribution. Furthermore, when , there is no diffusion forward process and the average error converges to 0. Hence, function increases as the forward diffusion process increases.\nTotal energy consumption: Based on the relationship between CPU frequency (cycles/s) and data size (bits), and drawing from relevant work [17 ###reference_b17###], the energy consumption associated with different offloading modes can be formulated as: When , the energy consumption for UE , denoted as , can be expressed as:\nwhere represents the CPU frequency of UE , and is the coefficient reflecting the power efficiency of UE . Conversely, when , the energy consumption for UE , referred to as , can be represented as\nwith representing the allocated computing capacity at the edge server for the computational task requested by UE . Note that represents the analogous coefficient related to the power efficiency of the edge server.\nCost functions: Based on the previously delineated cost considerations, it can be deduced that, in the scenario where , the cost function denoted as is amenable to expression as . Conversely, in the case where , the cost function assumes the form of . Herein, , , and denote the weighting coefficients for each cost component."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "II-B Utility of Reverse Diffusion Steps",
|
| 27 |
+
"text": "We additionally contemplate the utility of reverse diffusion steps in the reverse diffusion process (i.e., the alignment of the generated content with the request). It is intuitive that the larger the reverse diffusion steps, the higher the diversity of content generated and the stronger the alignment with user requests. Consequently, the utility function should exhibit a non-decreasing relationship with respect to . Besides, there is a marginal effect on the generation of the content (i.e., the content generation approaches saturation as increases), so the function should be concave. Then the utility function is defined as follows which is provided in [18 ###reference_b18###] for edge system:\nwhere is the constant parameter. Variations in the parameter exert influence on the quality of the generated content, whereas diminished values of entail trade-offs in terms of energy consumption and utility. Therefore, it is imperative to establish a state of equilibrium."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "3",
|
| 31 |
+
"parent_section_id": null,
|
| 32 |
+
"section_name": "III Joint Optimization of Cost and Utility",
|
| 33 |
+
"text": "In this section, we build the original optimization problem and make it convex by introducing auxiliary variables and adopting the penalized joint policy."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "3.1",
|
| 37 |
+
"parent_section_id": "3",
|
| 38 |
+
"section_name": "III-A Problem Formulation",
|
| 39 |
+
"text": "We define and . In general, throughout this paper, for a vector , we denote its -th dimension by . A joint optimization problem, incorporating cost expenditure, offloading determinations, and utility is formulated as problem :\nwhere , , signifies the amalgamation of cost functions and utility functions, and , serve as weight parameters specifically designated to modulate the magnitudes of the cost and utility components. and are optimization variables. Constraint (9 ###reference_###) ensures the enforcement of binary offloading decisions. Constraint (1 ###reference_###) imposes an upper bound on the total reverse diffusion steps for the edge server, thus addressing the constraint on available energy resources. Additionally, in constraint (2 ###reference_###) serves as a mechanism for regulating the energy consumption of UE , while acts as a constraint to prevent the excessive energy consumption associated with multiple computation tasks on the edge server.\nThen, we derived that the objective function of is not jointly convex, as the binary value of the optimization variable\n. Therefore, the optimization problem is stuck to be solved. More importantly, the discrete values of the variable and the coupling of the decision variables in the constraint (1 ###reference_###) make this problem become an intractable NP-hard problem."
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "3.2",
|
| 43 |
+
"parent_section_id": "3",
|
| 44 |
+
"section_name": "III-B Transformation of the Non-Convex Problem",
|
| 45 |
+
"text": "Derived from the formulated objective function, it becomes evident that the terms associated with are amenable to extraction through standard factorization procedures. Henceforth, we employ a strategic approach involving the decoupling of the optimization variables, subsequently leading to the transformation of the initial problem into a set of sub-problems [19 ###reference_b19###] that can be addressed iteratively.\nHandling objective function: To tackle the coupling of optimization variables, surrogate functions of objective function can be constructed by introducing auxiliary variables [20 ###reference_b20###]. In the -th iteration, the surrogate function is expressed as:\nwhere and are the introduced auxiliary variables related to the convergence of our algorithm which will be analyzed in Section III-F ###reference_###. Function for UE is defined as:\nHandling constraint (1 ###reference_###): In order to decouple the constraint (1 ###reference_###), we adopt a similar approach by introducing an auxiliary variable to facilitate the formulation of the surrogate function for each individual sub-problem. Consequently, in the -th iteration, the constraint undergoes a transformation as follows:\nwhere constrains the variables in each loop and converges with iterations.\nNoting that the coupling among the optimization variables has been addressed, it is pertinent to acknowledge that the optimization sub-problems remain non-convex due to the discrete nature of the variables ."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "3.3",
|
| 49 |
+
"parent_section_id": "3",
|
| 50 |
+
"section_name": "III-C Successive Convex Approximation",
|
| 51 |
+
"text": "In this section, we focus on using the penalized joint policy and successive convex approximation (SCA) technique to handle the discrete variables .\nHandling constraint (9 ###reference_###): To solve the discrete variable and without loss of equivalence, it can be rewritten as:\nNote that the optimization problem has been transitioned into a continuous optimization problem, resulting in a notable reduction in computational complexity when contrasted with the direct resolution of the original discrete variable . Nonetheless, the function in constraint (13 ###reference_###) is a concave function. To further facilitate the solution, we adopt a method that introduces a penalty term for this concave constraint into the function , represented as:\nwhere is the penalty parameter with . Then the objective function becomes concave due to the concavity of the second term. Simultaneously, given the second term is differentiable, we utilize the first-order Taylor series to linearize it at each iteration. Specifically, at the -th iteration,we approximate with which is denoted as , where is defined as the optimal solution of the -th sub-problem. Consequently, the objective function is converted to:\nThen, the -th sub-problem in the -th iteration is transformed equivalently to :\nUntil now, the original optimization problem can be solved by using iteratively, and the process of solving the intra-sub-problem is listed in Algorithm 1 ###reference_###. Given introduced conditions and the objective function are convex, is convex. Thus, we can adopt Karush-Kuhn-Tucker (KKT) conditions of to obtain the optimal solutions."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3.4",
|
| 55 |
+
"parent_section_id": "3",
|
| 56 |
+
"section_name": "III-D KKT Conditions for Problem",
|
| 57 |
+
"text": "We first write down the Lagrange function of by introducing multipliers for the constraints:\nAfter applying KKT conditions, we get:\nStationarity:\nwhere relating to the variable .\nComplementary slackness:\nPrimal Feasibility: (16a ###reference_.1###), (16b ###reference_.2###), (16c ###reference_.3###).\nDual Feasibility:\nUnder the aforementioned conditions, we then proceed to seek the optimal solutions by analyzing the KKT conditions and employing the proposed algorithm which is listed in Algorithm 2 ###reference_###.\nTheorem 1: The optimal solution of the proposed objective function can be obtained by Algorithm 2 ###reference_### and is expressed as:\nwhere . More specifically, meets the condition (III-D ###reference_17###a) with and satisfies condition (19 ###reference_###d) with .\nProof: Please see Appendix References ###reference_###."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "3.5",
|
| 61 |
+
"parent_section_id": "3",
|
| 62 |
+
"section_name": "III-E Inter-Sub-Problem Algorithm",
|
| 63 |
+
"text": "After completing the analysis of KKT conditions within each sub-problem, we obtain the optimal solution for each sub-problem. In this section, we introduce the inter-sub-problem algorithm, denoted as Algorithm 3 ###reference_###, which leverages iterative implementations of the SCA method. Its primary objective is the determination of the globally optimal solution for the original optimization problem ."
|
| 64 |
+
},
|
| 65 |
+
{
|
| 66 |
+
"section_id": "3.6",
|
| 67 |
+
"parent_section_id": "3",
|
| 68 |
+
"section_name": "III-F Time complexity, Solution Quality and Convergence",
|
| 69 |
+
"text": "Time Complexity. Based on the algorithms listed, the complexity of Algorithm 3 ###reference_### lies in the step 3-8. Assuming is the number of iterations in Algorithm 3 ###reference_###, then the complexity of adopting to obtain the sub-problem in step 4 is denoted as , where derives from computing for each user in an iteration. In step 5, Algorithm 1 ###reference_### is mainly solved by Algorithm 2 ###reference_### to obtain the optimization solution of the intra-sub-problem, so the complexity of Algorithm 2 ###reference_### is first analyzed. The total complexity of Algorithm 2 ###reference_### is as the complexity of computing and are both separately. Let denote the number of iterations in Algorithm 1 ###reference_###. The complexity of the Algorithm 1 ###reference_### is thus as step 5 of Algorithm 1 ###reference_### also costs . Therefore, the overall complexity can be derived as .\nSolution quality and convergence. Algorithm 3 ###reference_### comprises SCA, Algorithm 1 ###reference_###, and Algorithm 2 ###reference_###. Though the SCA method adopted to transform to results in some loss of the optimality, the KKT analysis by introducing the auxiliary variables in Algorithm 1 ###reference_### and Algorithm 3 ###reference_### both are without loss of optimality. Thus Steps 3-8 of Algorithm 3 ###reference_### can guarantee to find the global optimal solutions for . The convergence of Algorithm 3 is also evident from the preceding analysis."
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"section_id": "4",
|
| 73 |
+
"parent_section_id": null,
|
| 74 |
+
"section_name": "IV Experimental Results",
|
| 75 |
+
"text": "In this section, we evaluate the prior performance of our proposed algorithm. Firstly, we introduce the numerical parameters settings and then discuss experimental results."
|
| 76 |
+
},
|
| 77 |
+
{
|
| 78 |
+
"section_id": "4.1",
|
| 79 |
+
"parent_section_id": "4",
|
| 80 |
+
"section_name": "IV-A Parameter Settings",
|
| 81 |
+
"text": "In this experiment, we set the total number of users is 30. To generalize the experimental result without any distortion and error, the original value of the average error rate is set as 1, which implies the original content is complete Gaussian noise. Based on the resource limitation, the fixed discretization steps are and . The computing capacity and frequency of mobile user are set as GHz and GHz by default. The computation energy efficiency coefficient and is . For the penalty parameter , we set the value as ."
|
| 82 |
+
},
|
| 83 |
+
{
|
| 84 |
+
"section_id": "4.2",
|
| 85 |
+
"parent_section_id": "4",
|
| 86 |
+
"section_name": "IV-B Performance when Adapting Weight Parameters",
|
| 87 |
+
"text": "###figure_1### ###figure_2### ###figure_3### ###figure_4### As the weight factors of cost functions have an effect on the optimization process, we make modifications to these weights to tailor the focus of the associated cost functions. To investigate the different costs in the system, we augment , , and to respectively heighten the sensitivity of computational time, average error rate, and energy consumption. To further investigate the outcomes of our proposed algorithm, we conduct experimental trials across the combinations of and subsequently compare them to a baseline: random initialization. This random initialization entails the arbitrary allocation of diffusion steps following the stochastic selection of offloading decisions. For the baseline, we assume that no preference exists in the algorithm, then the weight factor is equal. Results of Fig. 2 ###reference_### show the computation , average error of computation results and energy consumption under different maximum assignable diffusion step . From Fig. 2 ###reference_###(a), we can see the total consumption time is decreased as increase, which means more complex computational tasks are processed on the edge server and the fixed discretization step on the edge server is faster. As the energy consumption is also larger than the local devices, the energy consumption also increases with increasing reverse diffusion steps which is presented in Fig. 2 ###reference_###(c). To provide a more intuitive representation of the quality of AIGC, we adopt the average accuracy () instead of average error as shown in Fig. 2 ###reference_###(b). For the different weights lines, the red line (proposed equal weights) outperforms others comprehensively as no sacrifice exists in this scenario compared with other lines. Furthermore, we can see that all proposed methods outperform the baseline, demonstrating our method\u2019s superiority."
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"section_id": "4.3",
|
| 91 |
+
"parent_section_id": "4",
|
| 92 |
+
"section_name": "IV-C Performance when Setting Different Diffusion Steps",
|
| 93 |
+
"text": "To conduct a comprehensive investigation into the performance of the proposed method, we consider diverse joint optimization scenarios when the system requires different . To emphasize the utility function, we set the parameters as . Three distinct experiments are undertaken, each characterized by different local reverse diffusion steps . As illustrated in Fig.2 ###reference_###, we can see that as increases, all three lines exhibit an initial decline followed by a stabilization phase upon reaching an optimal solution. This observation underscores that as the system benefits from a higher resource allocation for task processing, it tends to prioritize the augmentation of the utility function to achieve superior joint optimization performance."
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"section_id": "5",
|
| 97 |
+
"parent_section_id": null,
|
| 98 |
+
"section_name": "Conclusion",
|
| 99 |
+
"text": "In this paper, we propose one joint optimization algorithm that bridges the gap between AIGC models and edge computing while mitigating the constraints posed by resource limitations on devices. The presented algorithm offers an enhanced approach by simultaneously optimizing offloading decisions and the reverse diffusion steps of diffusion models, taking into account average error and energy consumption. Our analysis of experimental results demonstrates the effectiveness of the proposed algorithm in enhancing the system efficiency."
|
| 100 |
+
}
|
| 101 |
+
],
|
| 102 |
+
"appendix": [],
|
| 103 |
+
"tables": {},
|
| 104 |
+
"image_paths": {
|
| 105 |
+
"1(a)": {
|
| 106 |
+
"figure_path": "2312.06203v2_figure_1(a).png",
|
| 107 |
+
"caption": "Figure 1: Consumption under different maximum diffusion steps Semaxsuperscriptsubscript\ud835\udc46\ud835\udc52S_{e}^{\\max}italic_S start_POSTSUBSCRIPT italic_e end_POSTSUBSCRIPT start_POSTSUPERSCRIPT roman_max end_POSTSUPERSCRIPT.",
|
| 108 |
+
"url": "http://arxiv.org/html/2312.06203v2/x1.png"
|
| 109 |
+
},
|
| 110 |
+
"1(b)": {
|
| 111 |
+
"figure_path": "2312.06203v2_figure_1(b).png",
|
| 112 |
+
"caption": "Figure 1: Consumption under different maximum diffusion steps Semaxsuperscriptsubscript\ud835\udc46\ud835\udc52S_{e}^{\\max}italic_S start_POSTSUBSCRIPT italic_e end_POSTSUBSCRIPT start_POSTSUPERSCRIPT roman_max end_POSTSUPERSCRIPT.",
|
| 113 |
+
"url": "http://arxiv.org/html/2312.06203v2/x2.png"
|
| 114 |
+
},
|
| 115 |
+
"1(c)": {
|
| 116 |
+
"figure_path": "2312.06203v2_figure_1(c).png",
|
| 117 |
+
"caption": "Figure 1: Consumption under different maximum diffusion steps Semaxsuperscriptsubscript\ud835\udc46\ud835\udc52S_{e}^{\\max}italic_S start_POSTSUBSCRIPT italic_e end_POSTSUBSCRIPT start_POSTSUPERSCRIPT roman_max end_POSTSUPERSCRIPT.",
|
| 118 |
+
"url": "http://arxiv.org/html/2312.06203v2/x3.png"
|
| 119 |
+
},
|
| 120 |
+
"1(d)": {
|
| 121 |
+
"figure_path": "2312.06203v2_figure_1(d).png",
|
| 122 |
+
"caption": "Figure 1: Consumption under different maximum diffusion steps Semaxsuperscriptsubscript\ud835\udc46\ud835\udc52S_{e}^{\\max}italic_S start_POSTSUBSCRIPT italic_e end_POSTSUBSCRIPT start_POSTSUPERSCRIPT roman_max end_POSTSUPERSCRIPT.",
|
| 123 |
+
"url": "http://arxiv.org/html/2312.06203v2/x4.png"
|
| 124 |
+
}
|
| 125 |
+
},
|
| 126 |
+
"validation": true,
|
| 127 |
+
"references": [],
|
| 128 |
+
"url": "http://arxiv.org/html/2312.06203v2"
|
| 129 |
+
}
|
20240323/2312.07527v2.json
ADDED
|
@@ -0,0 +1,235 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"title": "BaRDa: A Belief and Reasoning Dataset that Separates Factual Accuracy and Reasoning Ability",
|
| 3 |
+
"abstract": "While there are numerous benchmarks comparing the performance of modern\nlanguage models (LMs), end-task evaluations often conflate notions\nof factual accuracy (\u201ctruth\u201d) and reasoning ability (\u201crationality\u201d,\nor \u201chonesty\u201d in the sense of correctly reporting implications of beliefs).\nOur goal is a dataset that clearly distinguishes these two notions.\nOur approach is to leverage and extend a collection of human-annotated entailment trees,\nengineered to express both good and bad chains of reasoning, and using a mixture of true\nand false facts, in particular including counterfactual examples,\nto avoid belief bias (also known as the \u201ccontent effect\u201d).\nThe resulting dataset,\ncalled BaRDa, contains 3000 entailments (1787 valid, 1213 invalid),\nusing 6681 true and 2319 false statements.\nTesting on four GPT-series models, GPT3(curie)/GPT3(davinici)/3.5/4, we find\nfactual accuracy (truth) scores of 74.1/80.6/82.6/87.1 and\nreasoning accuracy scores of 63.1/78.0/71.8/79.2.\nThis shows the clear progression of models towards improved factual accuracy\nand entailment reasoning, and the dataset provides a new benchmark that more cleanly separates\nand quantifies these two notions.111BaRDa and our evaluations of GPT* are available at https://allenai.org/data/barda",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Our goal is to better quantify both the factual accuracy and entailment\nreasoning capabilities of modern language models. Although\nnumerous evaluation benchmarks exist for testing models, e.g.,\nthe HELM evaluation suite Liang et al. (2022 ###reference_b9###),\nthe EleutherAI LM evaluation harness Gao et al. (2021 ###reference_b6###),\nand the GPT4 Technical Report datasets OpenAI (2023 ###reference_b10###),\nthe notions of factual accuracy and reasoning ability are often\nconflated in end-task evaluations. To address this limitation, our\ngoal is a dataset that more clearly separates these two\nnotions. Our approach is to use a mixture of both good\nand bad reasoning chains, constructed using a mixture of\ncorrect and incorrect (counterfactual) statements about\nthe world.\nAs well as being useful in their own right, these two measures can\nbe seen as indirectly measuring the \u201ctruthfulness\u201d and \u201chonesty\u201d\nof an AI system, critical properties to verify if we are to depend\non such systems.\nUsing the definitions from Evans et al. (2021 ###reference_b5###), a \u201ctruthful\u201d\nAI system is one whose statements are factually correct, hence\nwe can measure this simply by measuring factual accuracy of\nits statements.\nSimilarly, an \u201chonest\u201d AI system is one that \u201cbelieves what\nit says\u201d Evans et al. (2021 ###reference_b5###), which we can operationalize as\nreporting correct implications of its beliefs. For example,\nif a system says p = \u201cbirds can fly\u201d, we would therefore expect it\nto also say \u201csparrows can fly\u201d, \u201ceagles can fly\u201d, etc.\nif it really believed p (modulo also believing sparrows are\nbirds, etc.). Conversely, if the system did not confirm such\nconsequences (behaves irrationally), it is somewhat meaningless\nto say the sytem \u201cbelieves\u201d p. This notion of belief aligns\nwith work in philosophy Schwitzgebel (2019 ###reference_b11###), where\nan agent can be said to believe p if it \u201cacts as if p was true\u201d.\nTo measure factual accuracy and reasoning accuracy,\nwe present BaRDa, a new Belief and Reasoning Dataset\nconsisting of 9000 statements, some true and some not, and 3000\nentailment-based reasoning steps, again some valid and some not,\nusing those statements. We first describe BaRDa,\nthen use it to measure the belief and reasoning capabilities\nof four GPT-series models. We find a clear progression in\nboth these capabilities for newer models, with the one\nexception that GPT3 (text-davinci-003) appears stronger\nat entailment reasoning than its successor GPT3.5 (gpt-3.5-turbo).\nWe offer BaRDa to the community as a new evaluation tool\nfor measuring performance of other models, both existing and future."
},
{
"section_id": "2",
"parent_section_id": null,
"section_name": "BaRDa: The Belief and Reasoning Dataset",
"text": ""
},
{
"section_id": "2.1",
"parent_section_id": "2",
"section_name": "Design",
"text": "BaRDa contains a set of sentence-level reasoning steps,\nor entailments, of the form:\nif and \u2026 and then\nwhere the and are English statements (sentences)\nexpressing a possible fact about the world. for example:\nif a magnet can pick up steel objects\nand a paperclip is made of steel\nthen a magnet can pick up paperclips\nStatements may be true or false, i.e., we do not constrain\nourselves to factually correct rules.\nWe also label the entailment itself as valid (true) or not using the standard (but\ninformal) definition of textual entailment Dagan et al. (2013 ###reference_b3###) as follows:\n {quoting}[vskip=0mm,leftmargin=2mm]\nif the premises were true, then a person would reasonably\nconclude that the hypothesis were also true.\n\n\nNote that the entailment may still be valid, even if the\nfacts are not, for example the following entailment is valid (true):\nif a magnet can pick up wooden objects\nand a pencil is made of wood\nthen a magnet can pick up pencils\nIn other words, our dataset includes counterfactual situations,\nallowing us to measure a model\u2019s reasoning ability\nindependent of factuality. This is important,\nas it prevents us conflating truth and reasoning in\nour measurements."
},
{
"section_id": "2.2",
"parent_section_id": "2",
"section_name": "Metrics",
"text": "All statements in the entailments (both the premises and hypotheses )\nhave gold labels as to whether they are true in the real world or not. To measure\nbelief accuracy, we report the percentage of times a model\nmakes a correct prediction of the gold factual label.\nIn addition, each entailment has a gold label indicating if the reasoning step\nitself is valid (independent of factuality). Again, to measure reasoning\naccuracy, we report the percentage of times a model\nmakes a correct prediction of the gold entailment label.\nAs an additional metric of interest, we also measure whether models are internally consistent in\ntheir beliefs. To measure consistency,\nwe follow Li et al. (2019 ###reference_b8###) and use the conditional\nconstraint violation () metric, defined as\nthe fraction of entailments whose premises are believed true,\nbut whose hypothesis is not. In other words, over all entailments of the form , is:\nwhere denotes that the model believes to be true (similarly for ). The numerator of thus captures the number of entailments that the model violates.\nThe denominator captures the number of applicable entailments.\nWe then report consistency, defined as:\nNote that self-consistency is an intrinsic metric, that does not rely on gold labels.\nRather, it measures how consistently a model\u2019s own internal beliefs cohere together, regardless\nof what those beliefs are.\n###table_1###"
},
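As an illustrative aside, the consistency metric described in the section above can be computed directly from a model's yes/no belief predictions. The following minimal Python sketch assumes a simple record format (field names such as premises_believed and hypothesis_believed are placeholders for this sketch, not the released BaRDa schema):

def consistency(entailments):
    # An entailment is "applicable" if the model believes every premise.
    # It is "violated" if, in addition, the model does not believe the hypothesis.
    applicable = 0
    violated = 0
    for e in entailments:
        if all(e["premises_believed"]):
            applicable += 1
            if not e["hypothesis_believed"]:
                violated += 1
    if applicable == 0:
        return 1.0  # no applicable entailments: trivially consistent
    return 1.0 - violated / applicable  # consistency = 1 - conditional violation

For example, a model that believes all premises and the hypothesis of every entailment scores 1.0, while one that believes all premises but rejects every hypothesis scores 0.0.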
{
"section_id": "2.3",
"parent_section_id": "2",
"section_name": "Entailment Types",
"text": "Given a gold-labeled entailment, along with gold labels on the correctness of the premises and hypothesis statements,\nwe can define four classes of entailments, also illustrated in Figure 2 ###reference_###:\nGood facts, good entailment (TT)\nGood facts, bad entailment (TF)\nBad facts, good entailment (FT)\nBad facts, bad entailment (FF)\nwhere \u201cbad facts\u201d indicates at least one statement (premise and/or hypothesis) is false in the real world,\nand a \u201cbad entailment\u201d is one where the conclusion does not reasonably follow from the conditions given.\nHaving examples in these different classes is useful, as it allows us to separate factual accuracy\nfrom reasoning accuracy. In particular, we noticed in earlier work that models have a bias to assume an\nentailment is likely valid if all the facts are valid. By including examples of type FT and TF,\nwe can test how well a model has avoided this bias."
},
{
"section_id": "2.4",
"parent_section_id": "2",
"section_name": "Data Collection",
"text": "BaRDa is built using three sources of entailment data:\nEntailmentBank: Dalvi et al. (2021 ###reference_b4###) A large dataset of multi-premise entailments, assembled into entailment trees,\njustifying why the correct answer for a set of multiple choice-questions (drawn from the ARC dataset Clark et al. (2018 ###reference_b1###)).\nFor our purposes here, we use just the top-level of the entailment trees, i.e., a single entailment concluding the correct\nanswer hypothesis from one or more premises. For all these entailments, both the facts and the reasoning are\nconsidered correct (gold labels are all true), i.e., all entailments are of type TT.\nEntailer + Crowdsourcing: For the wrong multiple-choice answers to questions in the same ARC dataset,\nwe also generate entailment rules for them. To do this, we use the Entailer model Tafjord et al. (2022 ###reference_b12###), an 11B T5-based model\nspecifically fine-tuned to generate entailment rules as best it can, even if the conclusion hypothesis\nis false (e.g., see line 4 in 2 ###reference_###). Because the hypothesis is false, there necessarily must be some error in\nthe generated entailment: either one or more of the premises is false, or the entailment itself is invalid, or both.\nThis data provides a source of negative examples of both facts and reasoning for BaRDa, as the entailments\nare of types TF and FF.\nTo assign gold labels for this data, we use crowdworkers (Amazon Mechanical Turk). Each fact and each overall entailment\nreceives three independent ratings as to whether it is true/false (for facts), or valid/invalid (for entailments),\nand then the gold label is taken as the majority vote.\nGPT3 Generated + Crowdsourcing: Finally we use GPT3 to generate entailment rules using few-shot prompting -\nthis is similar to the previous item, except using prompting rather than fine-tuning to generate\na set of entailing premises. (The prompt contains examples of the kinds of entailment we wish it\nto generate). For the hypotheses, we used the list of core science facts contained\nin the QASC dataset Khot et al. (2019 ###reference_b7###), all considered to be true (i.e., gold = true).\nTo assign truth values to the generated premises, and to the generated entailment relation itself,\nwe again used crowdworkers, using the same approach as previously. This data is a source of\nall four types (TT, TF, FT, and FF).\nWe sample from these different sources as follows:\n500 TT entailments from EntailmentBank (1 above)\n1000 TF and FF entailments (500 of each) generated by Entailer (2 above)\n1000 examples generated by GPT3 of all types (3 above)\n500 additional examples generated by GPT3 of type TF, to balance the dataset (3 above)\nTo obtain a dataset with the most reliable annotations, we sampled as follows:\nFor the first item (500 examples from EntailmentBank), sampling was essentially random (taking the first 500 entailment steps from the\nfirst 177 entailment trees in the dataset). As these were expert-constructed entailments, we\nassume their annotations have high reliability. For the remaining three items, i.e., those with crowdsourced annotations,\nwe selected entailments with maximal inter-annotator agrement. Note that BaRDa is thus not a random subset of the full\ndata available, but is deliberately biased towards the most reliably annotated parts to minimize noise/avoid\ncontroversial examples, and maximize its utility as a benchmark.\nThis is similar to how the early RTE datasets were constructed Dagan et al. 
(2005 ###reference_b2###).\nThe total number of entailments in each of the four types is shown in Table 3 ###reference_###.\nOf the 9000 statements in the entailments (premises and hypothesis), 6681 are labeled true in the\nworld, and 2319 are labeled false."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "Experiments",
"text": ""
},
{
"section_id": "3.1",
"parent_section_id": "3",
"section_name": "Models",
"text": "We tested four models from the GPT* family on our dataset:\nGPT3c (text-curie-001): GPT3 curie,, a smaller (6.7B parameter) version of GPT3.\nGPT3 (text-davinci-003): The full version of GPT3.\nGPT3.5 (gpt-3.5-turbo): The API version of ChatGPT available from OpenAI.\nGPT4 (gpt-4): The most recent of the GPT* series."
},
{
"section_id": "3.2",
"parent_section_id": "3",
"section_name": "Prompting for Factual and Reasoning Correctness",
"text": "To elicit GPT*\u2019s answers about whether a statement is true (factual accuracy),\nand whether an entailment is valid (reasoning accuracy),\nwe use few-shot prompting to pose the statement/entailment to the model.\nThe prompts consist of examples, then the actual question (Is X true? Does P entail H?).\nThe generated result is then mapped to a yes/no answer, by searching for \u201cyes\u201d or \u201cno\u201d\nin the returned answer (typically the answer is exactly one of \u201cyes\u201d or \u201cno\u201d, as\nspecified in the prompt itself). The actual prompts used are shown in Appendix A ###reference_###."
},
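For illustration, the mapping from a generated answer to a yes/no label described in the section above can be as simple as a token search. A minimal Python sketch follows; the handling of ambiguous replies is an assumption of this sketch, not something specified by the paper:

import re

def map_to_yes_no(generated_text):
    # Tokenize the model's reply and look for an explicit "yes" or "no".
    tokens = re.findall(r"[a-z]+", generated_text.lower())
    if "yes" in tokens and "no" not in tokens:
        return True
    if "no" in tokens and "yes" not in tokens:
        return False
    return None  # ambiguous or empty reply; left undecided in this sketch

Since the prompts in Appendix A instruct the models to answer with a single word, the ambiguous branch should rarely be reached in practice.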
{
"section_id": "3.3",
"parent_section_id": "3",
"section_name": "Consistency",
"text": "Unlike factual and reasoning correctness, consistency is a property internal\nto a model (hence no gold labels required). As described in Section 2.2 ###reference_###,\nwe first count the number of entailments that the model believes are valid and\nwhere the model also believes all the premises are correct. In principle,\nif the model is reasoning fully consistently, it should therefore believe all the concluding\nhypotheses are valid. To measure consistency we measure the proportion that it\nactually considers correct (Section 2.2 ###reference_###)."
},
{
"section_id": "3.4",
"parent_section_id": "3",
"section_name": "Results",
"text": "with one exception: GPT3 (text-davinci-003)\nappears better able to recognize valid entailments than GPT3.5 (gpt-3.5-turbo).\nAgain, similar to factual accuracy, the smaller models make obvious reasoning errors.\nTable 6 ###reference_###) shows reasoning accuracies broken down\nby inference type (true/false facts, valid/invalid entailments), and illustates\nthat GPT4c is highly biased to scoring all entailments as valid, regardless\nof their gold label. For example, GPT4c labels the following invalid entailment\nas valid:\n\n\nif Galaxies are celestial bodies.\nand Stars are celestial bodies.\nthen Galaxies have stars.\nValid inference? Gold: F. GPT3c: T"
},
{
"section_id": "3.4.1",
"parent_section_id": "3.4",
"section_name": "3.4.1 Factual and Reasoning Accuracies",
"text": "Table 4 ###reference_### shows the factual and reasoning accuracies of the four models on BaRDa.\nIn addition, Table 5 ###reference_### shows factual accuracies on just\nthe subset of BaRDa where factual correctness was (a) crowdsourced (rather than just\nassumed true, e.g., in the EntailmentBank facts) and (b) crowdworkers unanimously\nmarked the statements as correct.\nAs expected, larger models have higher factual accuracy, reaching up to 87% (GPT4)\non this dataset, or up to 91.9% for the subset with unanimous crowdworker labels (Table 5 ###reference_###).\nThe smallest model, GPT3c, makes obvious factual errors, e.g.,:\n\u201cFrozen water is solid water.\u201d gold: T, GPT3c: F\n\u201cThe Dodo was flightless.\u201d gold: T, GPT3c: F\n\u201cthe moon revolves around the sun\u201d gold: F, GPT3c: T\n\u201cAll solids float on water.\u201d gold: F, GPT3c: T\nThe largest model, GPT4, also makes some factual errors, e.g.,\n\u201cfish have been on earth for 300000000 years\u201d gold: F, GPT4: T\n\u201cNut is a kind of food.\u201d gold: T, GPT4: F\n\u201cHumans have hearts.\u201d gold: T, GPT4: F\nIn addition, some of the GPT4 errors are due to ambiguity, vagueness, or subjectivity in the statements themselves (Section 3.5 ###reference_###), e.g.,:\n\u201cIf you lose weight, you will be happier.\u201d gold: T, GPT4: F\n\u201csoil does not contain energy\u201d gold: T, GPT4: F\n\u201ca tornado dries out plants\u201d gold: F, GPT4: T\nwith one exception: GPT3 (text-davinci-003)\nappears better able to recognize valid entailments than GPT3.5 (gpt-3.5-turbo).\nAgain, similar to factual accuracy, the smaller models make obvious reasoning errors.\nTable 6 ###reference_### ###reference_###) shows reasoning accuracies broken down\nby inference type (true/false facts, valid/invalid entailments), and illustates\nthat GPT4c is highly biased to scoring all entailments as valid, regardless\nof their gold label. For example, GPT4c labels the following invalid entailment\nas valid:\n\n\nif Galaxies are celestial bodies.\nand Stars are celestial bodies.\nthen Galaxies have stars.\nValid inference? Gold: F. GPT3c: T"
},
{
"section_id": "3.4.2",
"parent_section_id": "3.4",
"section_name": "3.4.2 Consistency",
"text": "As an additional metric of interest,\nTable 7 ###reference_### shows the self-consistency within models.\nNote that consistency is an intrinsic property of the model (does not require gold labels).\nCare needs to be taken to interpret these results, as a model can be trivially\nself-consistent by labelling all facts as false, or all facts as true. Rather,\nself-consistency needs to also be balanced against factual and reasoning accuracy.\nThis appears to be the case for GPT3c (curie), which has high self-consistency\nbut likely due to a bias to label everything as T: In Table 7 ###reference_###, GPT3c labels 2598 of\nthe 3000 BaRDa entailments as having both true facts and valid entailments (i.e., type TT),\nwhile in practice only 1178 are in this category (Table 3 ###reference_###).\nSimilarly, GPT3 (davinci) slightly over-estimates the number of entailments in this TT category (as 1485).\nFor the remaining two models, GPT4 achieves higher self-consistency, as one might expect."
},
{
"section_id": "3.5",
"parent_section_id": "3",
"section_name": "Analysis and Caveats",
"text": "These results are one of the first systematic comparisons of how different models\ncompare in both factual and reasonin accuracies. However, there are numerous caveats\nto bear in mind, and this work is best viewed as a first step in such comparative\nevaluations.\nFirst, we are only assessing factuality over a single class of statements, namely\nsimple, general, science-oriented statements, rather than encyclopedic statements\n(e.g., \u201cParis is the capital of France\u201d) or more complex statements (e.g.,\nmulti-sentence assertions).\nSimilarly, we are only assessing one type of reasoning, namely multi-premise textual entailments.\nWhile this is a general class, there are other classes not included in the dataset,\ne.g., arithmetic reasoning, probabailistic/judgemental reasoning, strict deductive reasoning.\nThird, despite our best efforts, the gold labels on both factuality and reasoning\nare necessarily noisy. The largest cause is sometimes present ambiguity in the\nstatements, either due to ambiguous context or word senses, e.g., \u201cA desk is usually short in length\u201d,\n\u201cAn iron nail has a higher conductivity than other materials.\u201d, or occasional\nlack of meaning, e.g., \u201cIce cream is left out of a freezer.\u201d. In addition,\nthe definition of \u201cvalid entailment\u201d is itself somewhat fuzzy, and sensible\nhumans will sometimes disagree on what constitutes a \u201creasonable\u201d inference,\ne.g., \u201cIf Plutonium is not fissile and Plutonium is radioactive then plutonium is dangerous.\u201d.\nFourth, as we are using few-shot prompting to convey the target tasks to\nthe models (Appendix A ###reference_###), the models\u2019 understanding of (hence performance on) the tasks\nwill only be as good as those prompts. It is possible with improved prompts and/or\nmore few-shot examples within them, model performance will change. (Note, though,\nthat we use the same prompts for all models, helping to keep comparative performances\nvalid)."
},
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Summary",
"text": "We have presented BaRDa, a new belief and reasoning dataset that clearly separates\nnotions of factual correctness (\u201ctruth\u201d) and reasoning accuracy (\u201crationality\u201d)\nfor evaluation purposes.\nTesting four GPT-series models, we observe a clear progression in\nboth these capabilities for newer models, with the one surprising\nexception being that GPT3 (text-davinci-003) appears stronger\nat entailment reasoning than its successor GPT3.5 (gpt-3.5-turbo).\nWe offer BaRDa to the community as a new evaluation tool\nfor measuring performance of models."
}
],
"appendix": [
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A Few-Shot Prompts",
"text": "We here show the prompts used to elicit a factual correctness / reasoning correctness answer from the GPT* models tested.\nAnswer the following yes/no question with either \"yes\" or \"no\". Just give a single word answer. Do not give any explanation or justification. \n\nHere are some examples: \nIs it true that an ocean contains large bodies of water? yes \nIs it true that lightning is similar to a volcano erupting? no \nIs it true that a fox squirrel is a kind of animal? yes \nIs it true that a rainbow is a kind of electromagnetic discharge? no \nIs it true that the surface of the moon is made up of water? no \nIs it true that the surface of the moon is made up of gases? no \nIs it true that a bluebird is a kind of animal? yes \nIs it true that the moon \u2019s surface is made up of oceans? no \nIs it true that the opposite of negative impact is positive impact? yes \nIs it true that building a new highway through the area has a negative impact on the ecosystem? yes \n\nNow let\u2019s do some more! Remember, answer with just a single word, yes or no. \nIs it true that insert the statement to assess here\nIn the following exercise, I would like you to tell me if a line of reasoning is reasonable or not. \n\nI will give you some facts and a possible conclusion. Please tell me whether the conclusion reasonably follows from the facts I gave you. \nIf the conclusion does reasonably follow from the facts, then please answer \"yes\". \nIf the conclusion does not reasonably follow from the facts, then please answer \"no\". \n\nNote that some of the facts may be false, but I am only interested whether the conclusion would reasonably follow IF those facts were true. In other words, imagine a world in which the given facts are true. Would it be reasonable to draw the conclusion from those facts, if they were true? \n\nHere are some examples:\n\nIF Vegetables are plants. \nAND Cabbages are plants. \nTHEN Cabbages are vegetables. \nQ: Does the rule\u2019s conclusion reasonably follow from the facts in the condition, if they were true? A: no\n\nIF a nail is made of metal \nAND metals conduct electricity \nTHEN a nail conducts electricity. \nQ: Does the rule\u2019s conclusion reasonably follow from the facts in the condition, if they were true? A: yes\n\nIF dogs are birds \nAND birds can fly \nTHEN dogs can fly \nQ: Does the rule\u2019s conclusion reasonably follow from the facts in the condition, if they were true? A: yes\n\nIF sound requires matter to travel \nAND a vacuum has no matter in it \nTHEN sound will not travel in a vacuum. \nQ: Does the rule\u2019s conclusion reasonably follow from the facts in the condition, if they were true? A: yes\n\nIF Erosion can cause a landslide. \nAND Mud is deposited by a landslide. \nTHEN Erosion can cause mud to be deposited. \nQ: Does the rule\u2019s conclusion reasonably follow from the facts in the condition, if they were true? A: yes\n\nIF An animal needs to breathe in order to live. \nAND Living things need water to live. \nTHEN Animals need water to live. \nQ: Does the rule\u2019s conclusion reasonably follow from the facts in the condition, if they were true? A: yes\n\nIF Frogs also have a larynx, or voice box, to make sounds. \nAND Animals that have vocal cords can make sounds. \nTHEN Frogs are animals. \nQ: Does the rule\u2019s conclusion reasonably follow from the facts in the condition, if they were true? A: no\n\nIF All humans breathe. \nAND Stones breathe. \nTHEN All humans and stones breathe. 
\nQ: Does the rule\u2019s conclusion reasonably follow from the facts in the condition, if they were true? A: yes\n\nIF If a planet is rocky, it can only have a thin atmosphere. \nAND Small planets and rocky planets have very thin atmospheres. \nTHEN If a planet is small and rocky, it has a thin atmosphere. \nQ: Does the rule\u2019s conclusion reasonably follow from the facts in the condition, if they were true? A: yes\n\nIF Damming a river can cause a lake to form. \nAND Dams are made of concrete. \nTHEN Dams are concrete lakes. \nQ: Does the rule\u2019s conclusion reasonably follow from the facts in the condition, if they were true? A: no\n\nNow your turn! insert the entailment to assess and the question here"
}
],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S0.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S0.T1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S0.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S0.T1.1.1.1.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.1.1.1.1\">Statements and Entailments</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S0.T1.1.1.1.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.1.1.2.1\">Gold</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S0.T1.1.1.1.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.1.1.3.1\">Model</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.2.2\">\n<td class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S0.T1.1.2.2.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.2.2.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.2.2.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.2.2.3.1\">(facts)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.2.2.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.2.2.4.1\">(ent.)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S0.T1.1.3.3.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"S0.T1.1.3.3.1.1\">// good facts, good entails:</span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S0.T1.1.3.3.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S0.T1.1.3.3.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S0.T1.1.3.3.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S0.T1.1.4.4.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1: a penny is made of copper</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.4.4.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.4.4.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.4.4.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">T</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.4.4.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S0.T1.1.5.5.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P2: copper is magnetic</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.5.5.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.5.5.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.5.5.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">T</td>\n<td class=\"ltx_td 
ltx_border_r\" id=\"S0.T1.1.5.5.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S0.T1.1.6.6.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">H: a penny is magnetic</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.6.6.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.6.6.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.6.6.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">T</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.6.6.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.7.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S0.T1.1.7.7.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1 & P2 entails H</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.7.7.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.7.7.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.7.7.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.7.7.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">T</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.8.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S0.T1.1.8.8.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"S0.T1.1.8.8.1.1\">// bad facts, good entails:</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.8.8.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.8.8.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.8.8.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.9.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S0.T1.1.9.9.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1: a giraffe is a mammal</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.9.9.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.9.9.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.9.9.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">T</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.9.9.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.10.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S0.T1.1.10.10.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P2: mammals lay eggs</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.10.10.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.10.10.2.1\" style=\"color:#228B22;\">F</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.10.10.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">F</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.10.10.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.11.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" 
id=\"S0.T1.1.11.11.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">H: a giraffe lays eggs</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.11.11.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.11.11.2.1\" style=\"color:#228B22;\">F</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.11.11.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.11.11.3.1\" style=\"color:#FF0000;\">T</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.11.11.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.12.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S0.T1.1.12.12.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1 & P2 entails H</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.12.12.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.12.12.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.12.12.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.12.12.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">T</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.13.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S0.T1.1.13.13.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_italic\" id=\"S0.T1.1.13.13.1.1\">// bad facts, bad entails:</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.13.13.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.13.13.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.13.13.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.14.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S0.T1.1.14.14.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1: Phobos is a moon</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.14.14.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.14.14.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.14.14.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">T</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.14.14.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.15.15\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S0.T1.1.15.15.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P2: Moons orbit planets</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.15.15.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.15.15.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.15.15.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">T</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.15.15.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.16.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S0.T1.1.16.16.1\" 
style=\"padding-left:2.0pt;padding-right:2.0pt;\">H: Phobos orbits Mars</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.16.16.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.16.16.2.1\" style=\"color:#228B22;\">F</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.16.16.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">F</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.16.16.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.17.17\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S0.T1.1.17.17.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1 & P2 entails H</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.17.17.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.17.17.2.1\" style=\"color:#228B22;\">F</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.17.17.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S0.T1.1.17.17.4\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.17.17.4.1\" style=\"color:#FF0000;\">T</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.18.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S0.T1.1.18.18.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.18.18.1.1\">Model score (truth)</span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S0.T1.1.18.18.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S0.T1.1.18.18.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">8/9 = 89%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.19.19\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S0.T1.1.19.19.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.19.19.1.1\">Model score (reasoning)</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S0.T1.1.19.19.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" colspan=\"2\" id=\"S0.T1.1.19.19.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">2/3 = 66%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S0.T1.1.20.20\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r\" id=\"S0.T1.1.20.20.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.1.20.20.1.1\">Model score (consistency)</span></td>\n<td class=\"ltx_td ltx_border_b ltx_border_r\" id=\"S0.T1.1.20.20.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r\" colspan=\"2\" id=\"S0.T1.1.20.20.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">1/2 = 50%</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Simplified examples of <span class=\"ltx_text ltx_font_smallcaps\" id=\"S0.T1.6.1\">BaRDa</span>\u2019s contents, along with\nillustrative model scores (not real) to illustrate scoring.\n<span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.7.2\">Truth</span> is accuracy of predicting the statements\u2019 (gold) truth 
values.\n<span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.8.3\">Reasoning</span> is accuracy of predicting the entailments\u2019 (gold) truth values.\n<span class=\"ltx_text ltx_font_bold\" id=\"S0.T1.9.4\">Consistency</span> is % of believed entailments with believed conditions\nand believed conclusions / % of believed entailments with believed conditions.\n</figcaption>\n</figure>",
"capture": "Table 1: Simplified examples of BaRDa\u2019s contents, along with\nillustrative model scores (not real) to illustrate scoring.\nTruth is accuracy of predicting the statements\u2019 (gold) truth values.\nReasoning is accuracy of predicting the entailments\u2019 (gold) truth values.\nConsistency is % of believed entailments with believed conditions\nand believed conclusions / % of believed entailments with believed conditions.\n"
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"S2.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S2.T2.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T2.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T2.1.1.1.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.1.1.1.1\">Statements and Entailments</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S2.T2.1.1.1.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.1.1.2.1\">Gold</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.2.2\">\n<td class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S2.T2.1.2.2.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.2.2.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.2.2.2.1\">(facts)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.2.2.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.2.2.3.1\">(ent.)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T2.1.3.3.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S2.T2.1.3.3.1.1\">// Good facts, good entailment (\u201cTT\u201d):</span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T2.1.3.3.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T2.1.3.3.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.4.4.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1: armor is made of metal</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.4.4.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.4.4.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.4.4.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.5.5.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P2: metal conducts electricity</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.5.5.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.5.5.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.5.5.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.6.6.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">H: armor conducts electricity</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.6.6.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.6.6.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.6.6.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S2.T2.1.7.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.7.7.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1 & P2 entails H</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.7.7.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.7.7.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.7.7.3.1\" style=\"color:#228B22;\">T</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.8.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T2.1.8.8.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S2.T2.1.8.8.1.1\">// Good facts, bad entailment (\u201cTF\u201d):</span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T2.1.8.8.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T2.1.8.8.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.9.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.9.9.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1: armor is made of metal</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.9.9.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.9.9.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.9.9.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.10.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.10.10.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P2: metal conducts heat</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.10.10.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.10.10.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.10.10.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.11.11\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.11.11.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">H: armor conducts electricity</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.11.11.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.11.11.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.11.11.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.12.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.12.12.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1 & P2 entails H</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.12.12.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.12.12.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.12.12.3.1\" style=\"color:#228B22;\">F</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.13.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T2.1.13.13.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text 
ltx_font_bold ltx_font_italic\" id=\"S2.T2.1.13.13.1.1\">// Bad facts, good entailment (\u201cFT\u201d):</span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T2.1.13.13.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T2.1.13.13.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.14.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.14.14.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1: armor is made of wood</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.14.14.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.14.14.2.1\" style=\"color:#228B22;\">F</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.14.14.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.15.15\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.15.15.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P2: wood conducts electricity</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.15.15.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.15.15.2.1\" style=\"color:#228B22;\">F</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.15.15.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.16.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.16.16.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">H: armor conducts electricity</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.16.16.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.16.16.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.16.16.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.17.17\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.17.17.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1 & P2 entails H</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.17.17.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.17.17.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.17.17.3.1\" style=\"color:#228B22;\">T</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.18.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.18.18.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1: armor is made of metal</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.18.18.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.18.18.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.18.18.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.19.19\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.19.19.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P2: metal conducts water</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.19.19.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S2.T2.1.19.19.2.1\" style=\"color:#228B22;\">F</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.19.19.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.20.20\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.20.20.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">H: armor conducts water</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.20.20.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.20.20.2.1\" style=\"color:#228B22;\">F</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.20.20.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.21.21\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.21.21.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1 & P2 entails H</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.21.21.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.21.21.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.21.21.3.1\" style=\"color:#228B22;\">T</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.22.22\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T2.1.22.22.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S2.T2.1.22.22.1.1\">// Bad facts, bad entailment (\u201cFF\u201d):</span></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T2.1.22.22.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S2.T2.1.22.22.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.23.23\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.23.23.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1: armor is made of wood</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.23.23.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.23.23.2.1\" style=\"color:#228B22;\">F</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.23.23.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.24.24\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.24.24.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P2: wood conducts heat</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.24.24.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.24.24.2.1\" style=\"color:#228B22;\">F</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.24.24.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.25.25\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.25.25.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">H: armor conducts electricity</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.25.25.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.25.25.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.25.25.3\" 
style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.26.26\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.26.26.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1 & P2 entails H</td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.26.26.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.26.26.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.26.26.3.1\" style=\"color:#228B22;\">F</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.27.27\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.27.27.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1: armor is made of metal</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.27.27.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.27.27.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.27.27.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.28.28\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.28.28.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P2: metal conducts electricity</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.28.28.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.28.28.2.1\" style=\"color:#228B22;\">T</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.28.28.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.29.29\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S2.T2.1.29.29.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">H: armor conducts water</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T2.1.29.29.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.29.29.2.1\" style=\"color:#228B22;\">F</span></td>\n<td class=\"ltx_td ltx_border_r\" id=\"S2.T2.1.29.29.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T2.1.30.30\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r\" id=\"S2.T2.1.30.30.1\" style=\"padding-left:2.0pt;padding-right:2.0pt;\">P1 & P2 entails H</td>\n<td class=\"ltx_td ltx_border_b ltx_border_r\" id=\"S2.T2.1.30.30.2\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S2.T2.1.30.30.3\" style=\"padding-left:2.0pt;padding-right:2.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T2.1.30.30.3.1\" style=\"color:#228B22;\">F</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Four different types of rule in the dataset. \u201cBad facts\u201d is when at least one of {P1,P2,H}\nis false in the real world.\nA \u201cbad\u201d entailment is one where the conclusion does not reasonably follow from the conditions given.\n</figcaption>\n</figure>",
"capture": "Table 2: Four different types of rule in the dataset. \u201cBad facts\u201d is when at least one of {P1,P2,H}\nis false in the real world.\nA \u201cbad\u201d entailment is one where the conclusion does not reasonably follow from the conditions given.\n"
},
"3": {
"table_html": "<figure class=\"ltx_table\" id=\"S2.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S2.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T3.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row\" id=\"S2.T3.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"S2.T3.1.1.1.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l\" colspan=\"2\" id=\"S2.T3.1.1.1.3\">All facts good?</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.1.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row\" id=\"S2.T3.1.2.2.1\"></th>\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"S2.T3.1.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S2.T3.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T3.1.2.2.3.1\">F*</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S2.T3.1.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T3.1.2.2.4.1\">T*</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T3.1.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S2.T3.1.3.1.1\">Entailment</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S2.T3.1.3.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T3.1.3.1.2.1\">*F</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T3.1.3.1.3\">672 (<span class=\"ltx_text ltx_font_bold\" id=\"S2.T3.1.3.1.3.1\">FF</span>)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T3.1.3.1.4\">541 (<span class=\"ltx_text ltx_font_bold\" id=\"S2.T3.1.3.1.4.1\">TF</span>)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T3.1.4.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S2.T3.1.4.2.1\">valid?</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S2.T3.1.4.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T3.1.4.2.2.1\">*T</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.1.4.2.3\">609 (<span class=\"ltx_text ltx_font_bold\" id=\"S2.T3.1.4.2.3.1\">FT</span>)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T3.1.4.2.4\">1178 (<span class=\"ltx_text ltx_font_bold\" id=\"S2.T3.1.4.2.4.1\">TT</span>)</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Distribution of entailments among the four types (Figure\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.07527v2#S2.T2\" title=\"Table 2 \u2023 Reasoning Consistency: \u2023 2.2 Metrics \u2023 2 BaRDa: The Belief and Reasoning Dataset \u2023 BaRDa: A Belief and Reasoning Dataset that Separates Factual Accuracy and Reasoning Ability\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>). </figcaption>\n</figure>",
|
| 115 |
+
"capture": "Table 3: Distribution of entailments among the four types (Figure\u00a02). "
|
| 116 |
+
},
|
| 117 |
+
"4": {
|
| 118 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T4.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T4.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S3.T4.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.1.2.1\">Factual</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.1.1.3.1\">Reasoning</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T4.1.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.2.2.1.1\">Model</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.2.2.2.1\">Accuracy</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.1.2.2.3.1\">Accuracy</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T4.1.3.3.1\">GPT3 (curie)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.3.3.2\">74.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T4.1.3.3.3\">63.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T4.1.4.4.1\">GPT3 (davinci)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.4.4.2\">80.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.4.4.3\">78.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T4.1.5.5.1\">GPT3.5</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.5.5.2\">82.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.5.5.3\">71.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T4.1.6.6.1\">GPT4</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.6.6.2\">87.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T4.1.6.6.3\">79.2</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>In general, the more powerful models have higher factual and reasoning accuracy, with one exception: GPT3 (davinci) appears\nbetter at recognizing good entailments than GPT3.5. </figcaption>\n</figure>",
|
| 119 |
+
"capture": "Table 4: In general, the more powerful models have higher factual and reasoning accuracy, with one exception: GPT3 (davinci) appears\nbetter at recognizing good entailments than GPT3.5. "
|
| 120 |
+
},
|
| 121 |
+
"5": {
|
| 122 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T5\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T5.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T5.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"S3.T5.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l\" colspan=\"2\" id=\"S3.T5.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.1.1.1.2.1\">Factual Accuracy</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"S3.T5.1.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.1.2.2.1.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T5.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.1.2.2.2.1\">All</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T5.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.1.2.2.3.1\">Unanimous</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.1.3.3\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_r\" id=\"S3.T5.1.3.3.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T5.1.3.3.2\">(9000 exs)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S3.T5.1.3.3.3\">(3275 exs)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T5.1.4.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T5.1.4.1.1\">GPT3 (curie)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T5.1.4.1.2\">74.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T5.1.4.1.3\">84.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.1.5.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T5.1.5.2.1\">GPT3 (davinci)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T5.1.5.2.2\">80.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T5.1.5.2.3\">87.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.1.6.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T5.1.6.3.1\">GPT3.5</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T5.1.6.3.2\">82.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T5.1.6.3.3\">88.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T5.1.7.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T5.1.7.4.1\">GPT4</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T5.1.7.4.2\">87.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T5.1.7.4.3\">91.9</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Factual accuracy on all statements, and the subset that are more \u201cclear cut\u201d cases (where all workers unanimously voted <span class=\"ltx_text ltx_font_bold\" id=\"S3.T5.3.1\">T</span>).\n</figcaption>\n</figure>",
|
| 123 |
+
"capture": "Table 5: Factual accuracy on all statements, and the subset that are more \u201cclear cut\u201d cases (where all workers unanimously voted T).\n"
|
| 124 |
+
},
|
| 125 |
+
"6": {
|
| 126 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T6\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T6.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T6.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S3.T6.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_l\" colspan=\"4\" id=\"S3.T6.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T6.1.1.1.2.1\">facts T*|F* + entails *T|*F</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T6.1.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S3.T6.1.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T6.1.2.2.2.1\">FF</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T6.1.2.2.3.1\">TF</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T6.1.2.2.4.1\">FT</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T6.1.2.2.5.1\">TT</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T6.1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T6.1.3.3.1\">GPT3c</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T6.1.3.3.2\">17.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T6.1.3.3.3\">10.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T6.1.3.3.4\">96.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T6.1.3.3.5\">96.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T6.1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T6.1.4.4.1\">GPT3</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.4.4.2\">81.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.4.4.3\">34.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.4.4.4\">83.4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.4.4.5\">93.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T6.1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T6.1.5.5.1\">GPT3.5</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.5.5.2\">84.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.5.5.3\">31.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.5.5.4\">58.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.5.5.5\">90.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T6.1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T6.1.6.6.1\">GPT4</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.6.6.2\">90.0</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.6.6.3\">42.1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.6.6.4\">75.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T6.1.6.6.5\">92.1</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>Reasoning accuracy by rule type. GPT3c is heavily biased to judge all entailments as valid (regardless of gold truth, *T or *F), while GPT4 is more discerning.\n</figcaption>\n</figure>",
|
| 127 |
+
"capture": "Table 6: Reasoning accuracy by rule type. GPT3c is heavily biased to judge all entailments as valid (regardless of gold truth, *T or *F), while GPT4 is more discerning.\n"
|
| 128 |
+
},
|
| 129 |
+
"7": {
|
| 130 |
+
"table_html": "<figure class=\"ltx_table\" id=\"S3.T7\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T7.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T7.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S3.T7.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T7.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T7.1.1.1.2.1\">Consistency</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T7.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T7.1.2.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T7.1.2.2.1.1\">Model</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T7.1.2.2.2\">% = (# p,h,e believed) /</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T7.1.3.3\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_r\" id=\"S3.T7.1.3.3.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T7.1.3.3.2\">\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 (# p,e believed)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T7.1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T7.1.4.4.1\">GPT3 (curie)</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T7.1.4.4.2\">98.1 = 2598 / 2649</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T7.1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T7.1.5.5.1\">GPT3 (davinci)</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T7.1.5.5.2\">92,1 = 1485 / 1613</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T7.1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T7.1.6.6.1\">GPT3.5</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T7.1.6.6.2\">86.2 = 1115 / 1293</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T7.1.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_r\" id=\"S3.T7.1.7.7.1\">GPT4</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T7.1.7.7.2\">93.1 = 1251 / 1344</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 7: </span>Consistency: A rule is self-inconsistent if it fires (premises p, entailment e believed true), thus implying h, but h is not believed.\n</figcaption>\n</figure>",
|
| 131 |
+
"capture": "Table 7: Consistency: A rule is self-inconsistent if it fires (premises p, entailment e believed true), thus implying h, but h is not believed.\n"
|
| 132 |
+
}
|
| 133 |
+
},
|
| 134 |
+
"image_paths": {},
|
| 135 |
+
"validation": true,
|
| 136 |
+
"references": [
|
| 137 |
+
{
|
| 138 |
+
"1": {
|
| 139 |
+
"title": "Think you have solved question answering? try arc, the ai2 reasoning\nchallenge.",
|
| 140 |
+
"author": "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa\nSchoenick, and Oyvind Tafjord. 2018.",
|
| 141 |
+
"venue": "ArXiv, abs/1803.05457.",
|
| 142 |
+
"url": null
|
| 143 |
+
}
|
| 144 |
+
},
|
| 145 |
+
{
|
| 146 |
+
"2": {
|
| 147 |
+
"title": "The pascal\nrecognising textual entailment challenge.",
|
| 148 |
+
"author": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005.",
|
| 149 |
+
"venue": "In Machine Learning Challenges Workshop.",
|
| 150 |
+
"url": "https://api.semanticscholar.org/CorpusID:8587959"
|
| 151 |
+
}
|
| 152 |
+
},
|
| 153 |
+
{
|
| 154 |
+
"3": {
|
| 155 |
+
"title": "Recognizing Textual Entailment: Models and Applications.",
|
| 156 |
+
"author": "Ido Dagan, Dan Roth, Mark Sammons, and Fabio Massimo Zanzotto. 2013.",
|
| 157 |
+
"venue": "Morgan and Claypool.",
|
| 158 |
+
"url": null
|
| 159 |
+
}
|
| 160 |
+
},
|
| 161 |
+
{
|
| 162 |
+
"4": {
|
| 163 |
+
"title": "Explaining answers with entailment trees.",
|
| 164 |
+
"author": "Bhavana Dalvi, Peter Alexander Jansen, Oyvind Tafjord, Zhengnan Xie, Hannah\nSmith, Leighanna Pipatanangkura, and Peter Clark. 2021.",
|
| 165 |
+
"venue": "In EMNLP.",
|
| 166 |
+
"url": null
|
| 167 |
+
}
|
| 168 |
+
},
|
| 169 |
+
{
|
| 170 |
+
"5": {
|
| 171 |
+
"title": "Truthful ai: Developing and governing ai that does not lie.",
|
| 172 |
+
"author": "Owain Evans, Owen Cotton-Barratt, Lukas Finnveden, Adam Bales, Avital Balwit,\nPeter Wills, Luca Righetti, and William Saunders. 2021.",
|
| 173 |
+
"venue": "arXiv, abs/2110.06674.",
|
| 174 |
+
"url": null
|
| 175 |
+
}
|
| 176 |
+
},
|
| 177 |
+
{
|
| 178 |
+
"6": {
|
| 179 |
+
"title": "A framework for\nfew-shot language model evaluation.",
|
| 180 |
+
"author": "Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles\nFoster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff,\nJason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang,\nand Andy Zou. 2021.",
|
| 181 |
+
"venue": "Https://github.com/EleutherAI/lm-evaluation-harness.",
|
| 182 |
+
"url": "https://doi.org/10.5281/zenodo.5371628"
|
| 183 |
+
}
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"7": {
|
| 187 |
+
"title": "Qasc: A dataset for question answering via sentence composition.",
|
| 188 |
+
"author": "Tushar Khot, Peter Clark, Michal Guerquin, Peter Alexander Jansen, and Ashish\nSabharwal. 2019.",
|
| 189 |
+
"venue": "In AAAI Conference on Artificial Intelligence.",
|
| 190 |
+
"url": null
|
| 191 |
+
}
|
| 192 |
+
},
|
| 193 |
+
{
|
| 194 |
+
"8": {
|
| 195 |
+
"title": "A logic-driven framework for consistency of neural models.",
|
| 196 |
+
"author": "Tao Li, Vivek Gupta, Maitrey Mehta, and Vivek Srikumar. 2019.",
|
| 197 |
+
"venue": "In EMNLP.",
|
| 198 |
+
"url": null
|
| 199 |
+
}
|
| 200 |
+
},
|
| 201 |
+
{
|
| 202 |
+
"9": {
|
| 203 |
+
"title": "Holistic evaluation of language models.",
|
| 204 |
+
"author": "Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu,\nMichihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar,\nBenjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove,\nChristopher D. Manning, Christopher R\u2019e, Diana Acosta-Navas, Drew A. Hudson,\nE. Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao,\nJue Wang, Keshav Santhanam, Laurel J. Orr, Lucia Zheng, Mert Yuksekgonul,\nMirac Suzgun, Nathan S. Kim, Neel Guha, Niladri S. Chatterji, Omar Khattab,\nPeter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar,\nSurya Ganguli, Tatsunori Hashimoto, Thomas F. Icard, Tianyi Zhang, Vishrav\nChaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta\nKoreeda. 2022.",
|
| 205 |
+
"venue": "Annals of the New York Academy of Sciences.",
|
| 206 |
+
"url": null
|
| 207 |
+
}
|
| 208 |
+
},
|
| 209 |
+
{
|
| 210 |
+
"10": {
|
| 211 |
+
"title": "Gpt-4 technical report.",
|
| 212 |
+
"author": "OpenAI. 2023.",
|
| 213 |
+
"venue": "ArXiv, abs/2303.08774.",
|
| 214 |
+
"url": null
|
| 215 |
+
}
|
| 216 |
+
},
|
| 217 |
+
{
|
| 218 |
+
"11": {
|
| 219 |
+
"title": "Belief.",
|
| 220 |
+
"author": "Eric Schwitzgebel. 2019.",
|
| 221 |
+
"venue": "Stanford Encyclopedia of Philosophy.",
|
| 222 |
+
"url": null
|
| 223 |
+
}
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"12": {
|
| 227 |
+
"title": "Entailer: Answering questions with faithful and truthful chains of\nreasoning.",
|
| 228 |
+
"author": "Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. 2022.",
|
| 229 |
+
"venue": "In EMNLP.",
|
| 230 |
+
"url": null
|
| 231 |
+
}
|
| 232 |
+
}
|
| 233 |
+
],
|
| 234 |
+
"url": "http://arxiv.org/html/2312.07527v2"
|
| 235 |
+
}
|
20240323/2312.14481v2.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
20240323/2401.08154v3.json
ADDED
|
@@ -0,0 +1,225 @@
| 1 |
+
{
|
| 2 |
+
"title": "TLIC: Learned Image Compression with ROI-Weighted Distortion and Bit Allocation",
|
| 3 |
+
"abstract": "This short paper describes our method for the track of image compression. To achieve better perceptual quality, we use the adversarial loss to generate realistic textures, use region of interest (ROI) mask to guide the bit allocation for different regions. Our Team name is TLIC.",
|
| 4 |
+
"sections": [
|
| 5 |
+
{
|
| 6 |
+
"section_id": "1",
|
| 7 |
+
"parent_section_id": null,
|
| 8 |
+
"section_name": "Introduction",
|
| 9 |
+
"text": "Learned image compression [1 ###reference_b1###] becomes an active area in recent years.\nSome of the models [2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###] have outperformed latest non-learned codec VVC. Most of the methods focus on\nnon-perceptual metric optimization. To improve perceptual quality,\nsome methods [7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###] employ Generative Adversial Networks (GANs) [10 ###reference_b10###] to generate perceptual\ntextures. In addition, VGG[11 ###reference_b11###] or LPIPS[12 ###reference_b12###]-based learned metrics are employed to help convergence.\nSince people have different standards for subjective criterion, for some content sensitive images, such as faces, documents, keeping its authenticity is more important than generating vivid but fake texture. To this end, in this paper,\nbased on the generative image compression framework [8 ###reference_b8###],\nwe propose Learned Image Compression with ROI-Weighted Distortion and Bit Allocation, named TLIC. We employ ROI to adjust the weight of the symbols in the latent to allocate more bits to regions of interest.\nIn addition, to enhance the guidance, we employ roi mask to control the weight of distortion of each pixels.\nAt last, to satisfy target bits, our model is trained with a variable rate compression method inspired from [13 ###reference_b13###]."
|
| 10 |
+
},
|
| 11 |
+
{
|
| 12 |
+
"section_id": "2",
|
| 13 |
+
"parent_section_id": null,
|
| 14 |
+
"section_name": "Method",
|
| 15 |
+
"text": ""
|
| 16 |
+
},
|
| 17 |
+
{
|
| 18 |
+
"section_id": "2.1",
|
| 19 |
+
"parent_section_id": "2",
|
| 20 |
+
"section_name": "Overview",
|
| 21 |
+
"text": "In this section, we briefly introduce our method for perceptual image compression.\nThe architecture of our TLIC is similar to Ma et al. [14 ###reference_b14###].\nThe gain and inverse gain units [13 ###reference_b13###] are employed for variable rate control.\nThe optimization process of our TLIC contains two stages. In the first stage,\nthe model is optimized for mean-square-error (MSE) with gain and inverse gain units.\nDuring the sencond stage, the model is optimized for perceptual quality.\nFor perceptual quality optimization, we adopt adversarial loss [10 ###reference_b10###]\n[7 ###reference_b7###][8 ###reference_b8###] to guide the decoder to generate realistic textures at low bit-rate. Our discriminator employs the Unet [15 ###reference_b15###] architecture. In addition to adversarial loss, we used L1 loss to keep the texture sharp. LPIPS loss [12 ###reference_b12###] and Style loss [16 ###reference_b16###] are also used to enhance the quality. We also use the Laplacian loss [17 ###reference_b17###] to reduced color variation. Following [14 ###reference_b14###], we use a region of interest (ROI) mask to guide the network to allocate more bits in regions of interest. The overall loss function is\nwhere is the rate, is the MSE distortion, is the L1 distortion, is the VGG-16 [11 ###reference_b11###] Lpips [12 ###reference_b12###] distortion, is the style loss [16 ###reference_b16###], is the Laplician distortion [17 ###reference_b17###], is the BCE adversarial distortion, is the ROI map. We use to adjust the weight of each loss."
|
| 22 |
+
},
|
| 23 |
+
{
|
| 24 |
+
"section_id": "2.2",
|
| 25 |
+
"parent_section_id": "2",
|
| 26 |
+
"section_name": "ROI-Weighted Distortion and Bit Allocation",
|
| 27 |
+
"text": "In our method, following Ma et al. [14 ###reference_b14###], we employ saliency map as the ROI map because the\nsaliency detection can distinguish the image into the focused area and background, which is more suitable to our strategy.\nWe employ the RMformer [18 ###reference_b18###] to generate ROI maps.\nThe RMformer is fixed during training and testing. The process is formulated as:\nwhere is the input image, is the detected ROI map.\nTo\nThe is further pooled to smooth the boundaries for smooth bit-allocation.\nwhere is the smoothed ROI map.\nIn addition, we use to control the weight of the ROI in terms of rate allocation. What\u2019s more, we protect a certain number of channels to retain appropriate information for the background to avoid the fading of its reconstructed texture.\nFinally, the bit-allocation is achieved at the encoder side and can be formulated as:\nwhere is the analysis transform, is the synthesis transform.\nTo guide the model to allocate more bits on ROI regions and L1 and MSE are pixel-wise losses, we directly use the ROI map to adjust the weight of each pixel."
|
| 28 |
+
},
|
| 29 |
+
{
|
| 30 |
+
"section_id": "2.3",
|
| 31 |
+
"parent_section_id": "2",
|
| 32 |
+
"section_name": "Adversial Training",
|
| 33 |
+
"text": "To generate realistic textures, we employ the adversarial loss to optimize the model. Following existing methods [8 ###reference_b8###, 14 ###reference_b14###],\nwe employ BCE loss. The loss function is formulated as:\nwhere is the discriminator. is employed to optimize the descriminator.\nWe employ the U-net discriminator [15 ###reference_b15###] for more accurate pixel-wise feedback while maintaining syntax feedback."
|
| 34 |
+
},
|
| 35 |
+
{
|
| 36 |
+
"section_id": "2.4",
|
| 37 |
+
"parent_section_id": "2",
|
| 38 |
+
"section_name": "Variable Rate Adaptation",
|
| 39 |
+
"text": "The gain units and inverse gain units are employed for continuous rate adaptation.\nSpecifically, the model is trained for target bit-rates. The gain units is employed to adjust the quantization step of each channel,where is the channel number of each channel. The inverse gain is in our model.\nThe process in Equation 4 ###reference_### becomes"
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"section_id": "2.5",
|
| 43 |
+
"parent_section_id": "2",
|
| 44 |
+
"section_name": "Entropy Model",
|
| 45 |
+
"text": "The entropy model is a simplified version of recent linear complexity multi-reference entropy model [5 ###reference_b5###].\nThe latent residual prediction modules [19 ###reference_b19###] and\nthe inter-slice global context modules are removed\nto reduce model parameters and we employ one layer convolution-based local context modules. The number of slices are set to ."
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"section_id": "2.6",
|
| 49 |
+
"parent_section_id": "2",
|
| 50 |
+
"section_name": "Training",
|
| 51 |
+
"text": "To\naccelerate the training, patches are employed during training and\nthe batch size is set ."
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"section_id": "3",
|
| 55 |
+
"parent_section_id": null,
|
| 56 |
+
"section_name": "Conclusion",
|
| 57 |
+
"text": "In this report, we propose to employ the ROI-weighted rate allocation and distortion for better perceptual quality. To enhance the pixel feedback and syntax feedback of adversarial optimization, we employ the U-net based discriminator architecture. To achieve the target bit-rate, we employ the gain units and inverse gain units."
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"section_id": "4",
|
| 61 |
+
"parent_section_id": null,
|
| 62 |
+
"section_name": "References",
|
| 63 |
+
"text": ""
|
| 64 |
+
}
|
| 65 |
+
],
|
| 66 |
+
"appendix": [],
|
| 67 |
+
"tables": {},
|
| 68 |
+
"image_paths": {},
|
| 69 |
+
"validation": true,
|
| 70 |
+
"references": [
|
| 71 |
+
{
|
| 72 |
+
"1": {
|
| 73 |
+
"title": "\u201cVariational image compression with a scale hyperprior,\u201d",
|
| 74 |
+
"author": "Johannes Ball\u00e9, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston,",
|
| 75 |
+
"venue": "in Int. Conf. on Learning Representations, 2018.",
|
| 76 |
+
"url": null
|
| 77 |
+
}
|
| 78 |
+
},
|
| 79 |
+
{
|
| 80 |
+
"2": {
|
| 81 |
+
"title": "\u201cLearned image compression with discretized gaussian mixture likelihoods and attention modules,\u201d",
|
| 82 |
+
"author": "Zhengxue Cheng, Heming Sun, Masaru Takeuchi, and Jiro Katto,",
|
| 83 |
+
"venue": "in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.",
|
| 84 |
+
"url": null
|
| 85 |
+
}
|
| 86 |
+
},
|
| 87 |
+
{
|
| 88 |
+
"3": {
|
| 89 |
+
"title": "\u201cElic: Efficient learned image compression with unevenly grouped space-channel contextual adaptive coding,\u201d",
|
| 90 |
+
"author": "Dailan He, Ziming Yang, Weikun Peng, Rui Ma, Hongwei Qin, and Yan Wang,",
|
| 91 |
+
"venue": "arXiv preprint arXiv:2203.10886, 2022.",
|
| 92 |
+
"url": null
|
| 93 |
+
}
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"4": {
|
| 97 |
+
"title": "\u201cMlic: Multi-reference entropy model for learned image compression,\u201d",
|
| 98 |
+
"author": "Wei Jiang, Jiayu Yang, Yongqi Zhai, Peirong Ning, Feng Gao, and Ronggang Wang,",
|
| 99 |
+
"venue": "in Proceedings of the 31st ACM International Conference on Multimedia, 2023, pp. 7618\u20137627.",
|
| 100 |
+
"url": null
|
| 101 |
+
}
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"5": {
|
| 105 |
+
"title": "\u201cMlic++: Linear complexity multi-reference entropy modeling for learned image compression,\u201d",
|
| 106 |
+
"author": "Wei Jiang and Ronggang Wang,",
|
| 107 |
+
"venue": "arXiv preprint arXiv:2307.15421, 2023.",
|
| 108 |
+
"url": null
|
| 109 |
+
}
|
| 110 |
+
},
|
| 111 |
+
{
|
| 112 |
+
"6": {
|
| 113 |
+
"title": "\u201cSlic: Self-conditioned adaptive transform with large-scale receptive fields for learned image compression,\u201d",
|
| 114 |
+
"author": "Wei Jiang, Peirong Ning, and Ronggang Wang,",
|
| 115 |
+
"venue": "arXiv preprint arXiv:2304.09571, 2023.",
|
| 116 |
+
"url": null
|
| 117 |
+
}
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"7": {
|
| 121 |
+
"title": "\u201cGenerative adversarial networks for extreme learned image compression,\u201d",
|
| 122 |
+
"author": "Eirikur Agustsson, Michael Tschannen, Fabian Mentzer, Radu Timofte, and Luc Van Gool,",
|
| 123 |
+
"venue": "in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 221\u2013231.",
|
| 124 |
+
"url": null
|
| 125 |
+
}
|
| 126 |
+
},
|
| 127 |
+
{
|
| 128 |
+
"8": {
|
| 129 |
+
"title": "\u201cHigh-fidelity generative image compression,\u201d",
|
| 130 |
+
"author": "Fabian Mentzer, George D Toderici, Michael Tschannen, and Eirikur Agustsson,",
|
| 131 |
+
"venue": "Advances in Neural Information Processing Systems, vol. 33, pp. 11913\u201311924, 2020.",
|
| 132 |
+
"url": null
|
| 133 |
+
}
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"9": {
|
| 137 |
+
"title": "\u201cMulti-realism image compression with a conditional generator,\u201d",
|
| 138 |
+
"author": "Eirikur Agustsson, David Minnen, George Toderici, and Fabian Mentzer,",
|
| 139 |
+
"venue": "in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 22324\u201322333.",
|
| 140 |
+
"url": null
|
| 141 |
+
}
|
| 142 |
+
},
|
| 143 |
+
{
|
| 144 |
+
"10": {
|
| 145 |
+
"title": "\u201cGenerative adversarial nets,\u201d",
|
| 146 |
+
"author": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio,",
|
| 147 |
+
"venue": "Advances in neural information processing systems, vol. 27, 2014.",
|
| 148 |
+
"url": null
|
| 149 |
+
}
|
| 150 |
+
},
|
| 151 |
+
{
|
| 152 |
+
"11": {
|
| 153 |
+
"title": "\u201cVery deep convolutional networks for large-scale image recognition,\u201d",
|
| 154 |
+
"author": "Karen Simonyan and Andrew Zisserman,",
|
| 155 |
+
"venue": "arXiv preprint arXiv:1409.1556, 2014.",
|
| 156 |
+
"url": null
|
| 157 |
+
}
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"12": {
|
| 161 |
+
"title": "\u201cThe unreasonable effectiveness of deep features as a perceptual metric,\u201d",
|
| 162 |
+
"author": "Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang,",
|
| 163 |
+
"venue": "in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 586\u2013595.",
|
| 164 |
+
"url": null
|
| 165 |
+
}
|
| 166 |
+
},
|
| 167 |
+
{
|
| 168 |
+
"13": {
|
| 169 |
+
"title": "\u201cAsymmetric gained deep image compression with continuous rate adaptation,\u201d",
|
| 170 |
+
"author": "Ze Cui, Jing Wang, Shangyin Gao, Tiansheng Guo, Yihui Feng, and Bo Bai,",
|
| 171 |
+
"venue": "in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 10532\u201310541.",
|
| 172 |
+
"url": null
|
| 173 |
+
}
|
| 174 |
+
},
|
| 175 |
+
{
|
| 176 |
+
"14": {
|
| 177 |
+
"title": "\u201cVariable rate roi image compression optimized for visual quality,\u201d",
|
| 178 |
+
"author": "Yi Ma, Yongqi Zhai, Chunhui Yang, Jiayu Yang, Ruofan Wang, Jing Zhou, Kai Li, Ying Chen, and Ronggang Wang,",
|
| 179 |
+
"venue": "in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 1936\u20131940.",
|
| 180 |
+
"url": null
|
| 181 |
+
}
|
| 182 |
+
},
|
| 183 |
+
{
|
| 184 |
+
"15": {
|
| 185 |
+
"title": "\u201cU-net: Convolutional networks for biomedical image segmentation,\u201d",
|
| 186 |
+
"author": "Olaf Ronneberger, Philipp Fischer, and Thomas Brox,",
|
| 187 |
+
"venue": "in Medical Image Computing and Computer-Assisted Intervention\u2013MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18. Springer, 2015, pp. 234\u2013241.",
|
| 188 |
+
"url": null
|
| 189 |
+
}
|
| 190 |
+
},
|
| 191 |
+
{
|
| 192 |
+
"16": {
|
| 193 |
+
"title": "\u201cImage style transfer using convolutional neural networks,\u201d",
|
| 194 |
+
"author": "Leon A Gatys, Alexander S Ecker, and Matthias Bethge,",
|
| 195 |
+
"venue": "in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 2414\u20132423.",
|
| 196 |
+
"url": null
|
| 197 |
+
}
|
| 198 |
+
},
|
| 199 |
+
{
|
| 200 |
+
"17": {
|
| 201 |
+
"title": "\u201cContext-aware synthesis for video frame interpolation,\u201d",
|
| 202 |
+
"author": "Simon Niklaus and Feng Liu,",
|
| 203 |
+
"venue": "in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 1701\u20131710.",
|
| 204 |
+
"url": null
|
| 205 |
+
}
|
| 206 |
+
},
|
| 207 |
+
{
|
| 208 |
+
"18": {
|
| 209 |
+
"title": "\u201cRecurrent multi-scale transformer for high-resolution salient object detection,\u201d",
|
| 210 |
+
"author": "Xinhao Deng, Pingping Zhang, Wei Liu, and Huchuan Lu,",
|
| 211 |
+
"venue": "in Proceedings of the 31st ACM International Conference on Multimedia, 2023, pp. 7413\u20137423.",
|
| 212 |
+
"url": null
|
| 213 |
+
}
|
| 214 |
+
},
|
| 215 |
+
{
|
| 216 |
+
"19": {
|
| 217 |
+
"title": "\u201cChannel-wise autoregressive entropy models for learned image compression,\u201d",
|
| 218 |
+
"author": "David Minnen and Saurabh Singh,",
|
| 219 |
+
"venue": "in 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020, pp. 3339\u20133343.",
|
| 220 |
+
"url": null
|
| 221 |
+
}
|
| 222 |
+
}
|
| 223 |
+
],
|
| 224 |
+
"url": "http://arxiv.org/html/2401.08154v3"
|
| 225 |
+
}
|
20240323/2401.08503v3.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|